Kubernetes Documentation

Kubernetes is an open source container orchestration engine for automating deployment, scaling, and management of containerized applications. The open source project is hosted by the Cloud Native Computing Foundation (CNCF).


Understand Kubernetes

Learn about Kubernetes and its fundamental concepts.

Try Kubernetes

Follow tutorials to learn how to deploy applications in Kubernetes.

Set up a K8s cluster

Get Kubernetes running based on your resources and needs.

Learn how to use Kubernetes

Look up common tasks and how to perform them using a short sequence of steps.

Look up reference information

Browse terminology, command line syntax, API resource types, and setup tool documentation.

Training

Get certified in Kubernetes and make your cloud native projects successful!

    Download Kubernetes

    Install Kubernetes or upgrade to the newest version.

      About the documentation

      This website contains documentation for the current and previous 4 versions of Kubernetes.

        Last modified August 07, 2025 at 9:34 PM PST: Prepare docs home page for Docsy (710d15e99b)

              ClusterTrustBundle v1beta1

              ClusterTrustBundle is a cluster-scoped container for X.509 trust anchors (root certificates).

              apiVersion: certificates.k8s.io/v1beta1

              import "k8s.io/api/certificates/v1beta1"

              ClusterTrustBundle

              ClusterTrustBundle is a cluster-scoped container for X.509 trust anchors (root certificates).

              ClusterTrustBundle objects are considered to be readable by any authenticated user in the cluster, because they can be mounted by pods using the clusterTrustBundle projection. All service accounts have read access to ClusterTrustBundles by default. Users who have only namespace-level access to a cluster can read ClusterTrustBundles by impersonating a service account that they have access to.
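
A Pod consumes a ClusterTrustBundle through the clusterTrustBundle projected volume source. The following is a minimal sketch; the bundle name, image, and mount path are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app
    image: registry.example/app:latest   # hypothetical image
    volumeMounts:
    - name: trust-anchors
      mountPath: /etc/ssl/custom
      readOnly: true
  volumes:
  - name: trust-anchors
    projected:
      sources:
      - clusterTrustBundle:
          name: example.com:foo:v1       # hypothetical bundle name
          path: ca-bundle.pem
```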

              A ClusterTrustBundle can optionally be associated with a particular signer, in which case it contains one valid set of trust anchors for that signer. Signers may have multiple associated ClusterTrustBundles; each is an independent set of trust anchors for that signer. Admission control is used to enforce that only users with permissions on the signer can create or modify the corresponding bundle.
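
A minimal sketch of a signer-linked ClusterTrustBundle follows; the signer name and certificate content are hypothetical placeholders:

```yaml
apiVersion: certificates.k8s.io/v1beta1
kind: ClusterTrustBundle
metadata:
  # Signer-linked bundles must be named with the signer name as a prefix,
  # translating slashes to colons (see signerName below).
  name: example.com:foo:v1
spec:
  signerName: example.com/foo
  trustBundle: |
    -----BEGIN CERTIFICATE-----
    <PEM-wrapped, DER-formatted CA certificate>
    -----END CERTIFICATE-----
```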


              • apiVersion: certificates.k8s.io/v1beta1

              • kind: ClusterTrustBundle

              • metadata (ObjectMeta)

                metadata contains the object metadata.

              • spec (ClusterTrustBundleSpec), required

                spec contains the signer (if any) and trust anchors.

              ClusterTrustBundleSpec

              ClusterTrustBundleSpec contains the signer and trust anchors.


              • trustBundle (string), required

                trustBundle contains the individual X.509 trust anchors for this bundle, as a PEM bundle of PEM-wrapped, DER-formatted X.509 certificates.

                The data must consist only of PEM certificate blocks that parse as valid X.509 certificates. Each certificate must include a basic constraints extension with the CA bit set. The API server will reject objects that contain duplicate certificates, or that use PEM block headers.

                Users of ClusterTrustBundles, including Kubelet, are free to reorder and deduplicate certificate blocks in this file according to their own logic, as well as to drop PEM block headers and inter-block data.
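
The normalization that consumers are permitted to perform can be sketched as a small text-level helper. This function is illustrative only (it is not part of any Kubernetes library) and does not parse or validate X.509 certificates:

```python
import re

# Matches one PEM certificate block, capturing the base64 body.
_CERT_BLOCK = re.compile(
    r"-----BEGIN CERTIFICATE-----\n(.*?)-----END CERTIFICATE-----",
    re.DOTALL,
)

def normalize_trust_bundle(pem: str) -> str:
    """Keep one copy of each certificate block, dropping duplicates and
    inter-block data, as consumers of trustBundle are free to do.
    (A real consumer would also strip PEM block headers and verify that
    each block parses as an X.509 CA certificate.)"""
    seen = set()
    blocks = []
    for match in _CERT_BLOCK.finditer(pem):
        body = match.group(1).strip()
        if body not in seen:
            seen.add(body)
            blocks.append(
                "-----BEGIN CERTIFICATE-----\n"
                + body
                + "\n-----END CERTIFICATE-----"
            )
    return "\n".join(blocks)
```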

              • signerName (string)

                signerName indicates the associated signer, if any.

                In order to create or update a ClusterTrustBundle that sets signerName, you must have the following cluster-scoped permission: group=certificates.k8s.io resource=signers resourceName=<the signer name> verb=attest.

                If signerName is not empty, then the ClusterTrustBundle object must be named with the signer name as a prefix (translating slashes to colons). For example, for the signer name example.com/foo, valid ClusterTrustBundle object names include example.com:foo:abc and example.com:foo:v1.

                If signerName is empty, then the ClusterTrustBundle object's name must not have such a prefix.
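
The naming rule can be expressed as a small check. This helper is hypothetical (not part of any Kubernetes client library), and its handling of the empty-signerName case is an assumption: it rejects any colon in the name, which may be stricter than the API server's actual validation:

```python
def name_matches_signer(bundle_name: str, signer_name: str) -> bool:
    """Check a ClusterTrustBundle name against its signerName.

    With a signer, the object name must start with the signer name,
    slashes translated to colons, followed by ':'. Without a signer,
    we conservatively require that the name contain no colon at all
    (an assumption about what "no such prefix" means).
    """
    if not signer_name:
        return ":" not in bundle_name
    prefix = signer_name.replace("/", ":") + ":"
    return bundle_name.startswith(prefix)
```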

                List/watch requests for ClusterTrustBundles can filter on this field using a spec.signerName=NAME field selector.

              ClusterTrustBundleList

              ClusterTrustBundleList is a collection of ClusterTrustBundle objects


              • apiVersion: certificates.k8s.io/v1beta1

              • kind: ClusterTrustBundleList

              • metadata (ListMeta)

                metadata contains the list metadata.

              • items ([]ClusterTrustBundle), required

                items is a collection of ClusterTrustBundle objects

              Operations


              get read the specified ClusterTrustBundle

              HTTP Request

              GET /apis/certificates.k8s.io/v1beta1/clustertrustbundles/{name}

              Parameters

              • name (in path): string, required

                name of the ClusterTrustBundle

              • pretty (in query): string

                pretty

              Response

              200 (ClusterTrustBundle): OK

              401: Unauthorized

              list list or watch objects of kind ClusterTrustBundle

              HTTP Request

              GET /apis/certificates.k8s.io/v1beta1/clustertrustbundles

              Parameters

              Response

              200 (ClusterTrustBundleList): OK

              401: Unauthorized

              create create a ClusterTrustBundle

              HTTP Request

              POST /apis/certificates.k8s.io/v1beta1/clustertrustbundles

              Parameters

              Response

              200 (ClusterTrustBundle): OK

              201 (ClusterTrustBundle): Created

              202 (ClusterTrustBundle): Accepted

              401: Unauthorized

              update replace the specified ClusterTrustBundle

              HTTP Request

              PUT /apis/certificates.k8s.io/v1beta1/clustertrustbundles/{name}

              Parameters

              Response

              200 (ClusterTrustBundle): OK

              201 (ClusterTrustBundle): Created

              401: Unauthorized

              patch partially update the specified ClusterTrustBundle

              HTTP Request

              PATCH /apis/certificates.k8s.io/v1beta1/clustertrustbundles/{name}

              Parameters

              • name (in path): string, required

                name of the ClusterTrustBundle

              • body: Patch, required

              • dryRun (in query): string

                dryRun

              • fieldManager (in query): string

                fieldManager

              • fieldValidation (in query): string

                fieldValidation

              • force (in query): boolean

                force

              • pretty (in query): string

                pretty

              Response

              200 (ClusterTrustBundle): OK

              201 (ClusterTrustBundle): Created

              401: Unauthorized

              delete delete a ClusterTrustBundle

              HTTP Request

              DELETE /apis/certificates.k8s.io/v1beta1/clustertrustbundles/{name}

              Parameters

              Response

              200 (Status): OK

              202 (Status): Accepted

              401: Unauthorized

              deletecollection delete collection of ClusterTrustBundle

              HTTP Request

              DELETE /apis/certificates.k8s.io/v1beta1/clustertrustbundles

              Parameters

              Response

              200 (Status): OK

              401: Unauthorized

              This page is automatically generated.

              If you plan to report an issue with this page, mention that the page is auto-generated in your issue description. The fix may need to happen elsewhere in the Kubernetes project.

              Last modified April 24, 2025 at 9:14 AM PST: Markdown API reference for v1.33 (b84ec30bbb)

              ClusterRole

              ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding.

              apiVersion: rbac.authorization.k8s.io/v1

              import "k8s.io/api/rbac/v1"

              ClusterRole

              ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding.


              • apiVersion: rbac.authorization.k8s.io/v1

              • kind: ClusterRole

              • metadata (ObjectMeta)

                Standard object's metadata.

              • aggregationRule (AggregationRule)

                AggregationRule is an optional field that describes how to build the Rules for this ClusterRole. If AggregationRule is set, then the Rules are controller managed and direct changes to Rules will be stomped by the controller.

                AggregationRule describes how to locate ClusterRoles to aggregate into the ClusterRole

                • aggregationRule.clusterRoleSelectors ([]LabelSelector)

                  Atomic: will be replaced during a merge

                  ClusterRoleSelectors holds a list of selectors which will be used to find ClusterRoles and create the rules. If any of the selectors match, then the ClusterRole's permissions will be added.
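
As a sketch, an aggregated ClusterRole selects other ClusterRoles by label; the label key here is hypothetical:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring
aggregationRule:
  clusterRoleSelectors:
  - matchLabels:
      example.com/aggregate-to-monitoring: "true"
# rules is controller managed: the controller fills it in from all
# ClusterRoles whose labels match, and stomps direct edits.
rules: []
```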

              • rules ([]PolicyRule)

                Atomic: will be replaced during a merge

                Rules holds all the PolicyRules for this ClusterRole

                PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to.

                • rules.apiGroups ([]string)

                  Atomic: will be replaced during a merge

                  APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "" represents the core API group and "*" represents all API groups.

                • rules.resources ([]string)

                  Atomic: will be replaced during a merge

                  Resources is a list of resources this rule applies to. '*' represents all resources.

                • rules.verbs ([]string), required

                  Atomic: will be replaced during a merge

                  Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in this rule. '*' represents all verbs.

                • rules.resourceNames ([]string)

                  Atomic: will be replaced during a merge

                  ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed.

                • rules.nonResourceURLs ([]string)

                  Atomic: will be replaced during a merge

                  NonResourceURLs is a set of partial URLs that a user should have access to. *s are allowed, but only as the full, final step in the path. Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to API resources (such as "pods" or "secrets") or non-resource URL paths (such as "/api"), but not both.
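
Putting the fields above together, here is a sketch of a ClusterRole with one resource rule and one non-resource rule (a single PolicyRule applies to resources or to non-resource URLs, never both):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-reader
rules:
- apiGroups: [""]              # "" selects the core API group
  resources: ["secrets"]
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/healthz", "/healthz/*"]  # '*' only as the final step
  verbs: ["get"]
```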

              ClusterRoleList

              ClusterRoleList is a collection of ClusterRoles


              • apiVersion: rbac.authorization.k8s.io/v1

              • kind: ClusterRoleList

              • metadata (ListMeta)

                Standard object's metadata.

              • items ([]ClusterRole), required

                Items is a list of ClusterRoles

              Operations


              get read the specified ClusterRole

              HTTP Request

              GET /apis/rbac.authorization.k8s.io/v1/clusterroles/{name}

              Parameters

              • name (in path): string, required

                name of the ClusterRole

              • pretty (in query): string

                pretty

              Response

              200 (ClusterRole): OK

              401: Unauthorized

              list list or watch objects of kind ClusterRole

              HTTP Request

              GET /apis/rbac.authorization.k8s.io/v1/clusterroles

              Parameters

              Response

              200 (ClusterRoleList): OK

              401: Unauthorized

              create create a ClusterRole

              HTTP Request

              POST /apis/rbac.authorization.k8s.io/v1/clusterroles

              Parameters

              Response

              200 (ClusterRole): OK

              201 (ClusterRole): Created

              202 (ClusterRole): Accepted

              401: Unauthorized

              update replace the specified ClusterRole

              HTTP Request

              PUT /apis/rbac.authorization.k8s.io/v1/clusterroles/{name}

              Parameters

              Response

              200 (ClusterRole): OK

              201 (ClusterRole): Created

              401: Unauthorized

              patch partially update the specified ClusterRole

              HTTP Request

              PATCH /apis/rbac.authorization.k8s.io/v1/clusterroles/{name}

              Parameters

              • name (in path): string, required

                name of the ClusterRole

              • body: Patch, required

              • dryRun (in query): string

                dryRun

              • fieldManager (in query): string

                fieldManager

              • fieldValidation (in query): string

                fieldValidation

              • force (in query): boolean

                force

              • pretty (in query): string

                pretty

              Response

              200 (ClusterRole): OK

              201 (ClusterRole): Created

              401: Unauthorized

              delete delete a ClusterRole

              HTTP Request

              DELETE /apis/rbac.authorization.k8s.io/v1/clusterroles/{name}

              Parameters

              Response

              200 (Status): OK

              202 (Status): Accepted

              401: Unauthorized

              deletecollection delete collection of ClusterRole

              HTTP Request

              DELETE /apis/rbac.authorization.k8s.io/v1/clusterroles

              Parameters

              Response

              200 (Status): OK

              401: Unauthorized

              This page is automatically generated.

              If you plan to report an issue with this page, mention that the page is auto-generated in your issue description. The fix may need to happen elsewhere in the Kubernetes project.

              Last modified April 09, 2025 at 6:36 PM PST: Update API reference docs for v1.32 (a3b579d035)

                    kubectl create ingress

                    Synopsis

                    Create an ingress with the specified name.

                    kubectl create ingress NAME --rule=host/path=service:port[,tls[=secret]] 
                    

                    Examples

                      # Create a single ingress called 'simple' that directs requests for foo.com/bar
                      # to service svc1:8080, with a TLS secret "my-cert"
                      kubectl create ingress simple --rule="foo.com/bar=svc1:8080,tls=my-cert"
                      
                      # Create a catch all ingress of "/path" pointing to service svc:port and Ingress Class as "otheringress"
                      kubectl create ingress catch-all --class=otheringress --rule="/path=svc:port"
                      
                      # Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2
                      kubectl create ingress annotated --class=default --rule="foo.com/bar=svc:port" \
                      --annotation ingress.annotation1=foo \
                      --annotation ingress.annotation2=bla
                      
                      # Create an ingress with the same host and multiple paths
                      kubectl create ingress multipath --class=default \
                      --rule="foo.com/=svc:port" \
                      --rule="foo.com/admin/=svcadmin:portadmin"
                      
                      # Create an ingress with multiple hosts and the pathType as Prefix
                      kubectl create ingress ingress1 --class=default \
                      --rule="foo.com/path*=svc:8080" \
                      --rule="bar.com/admin*=svc2:http"
                      
                      # Create an ingress with TLS enabled using the default ingress certificate and different path types
                      kubectl create ingress ingtls --class=default \
                      --rule="foo.com/=svc:https,tls" \
                      --rule="foo.com/path/subpath*=othersvc:8080"
                      
                      # Create an ingress with TLS enabled using a specific secret and pathType as Prefix
                      kubectl create ingress ingsecret --class=default \
                      --rule="foo.com/*=svc:8080,tls=secret1"
                      
                      # Create an ingress with a default backend
                      kubectl create ingress ingdefault --class=default \
                      --default-backend=defaultsvc:http \
                      --rule="foo.com/*=svc:8080,tls=secret1"
                    

                    Options

                    --allow-missing-template-keys     Default: true

                    If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.

                    --annotation strings

                    Annotation to insert in the ingress object, in the format annotation=value

                    --class string

                    Ingress Class to be used

                    --default-backend string

                    Default service for backend, in format of svcname:port

                    --dry-run string[="unchanged"]     Default: "none"

                    Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.

                    --field-manager string     Default: "kubectl-create"

                    Name of the manager used to track field ownership.

                    -h, --help

                    help for ingress

                    -o, --output string

                    Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).

                    --rule strings

                    Rule in format host/path=service:port[,tls=secretname]. Paths containing the leading character '*' are considered pathType=Prefix. tls argument is optional.

                    --save-config

                    If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future.

                    --show-managed-fields

                    If true, keep the managedFields when printing objects in JSON or YAML format.

                    --template string

                    Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].

                    --validate string[="strict"]     Default: "strict"

                    Must be one of: strict (or true), warn, ignore (or false). "true" or "strict" will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not. "warn" will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as "ignore" otherwise. "false" or "ignore" will not perform any schema validation, silently dropping any unknown or duplicate fields.

                    Parent Options Inherited

                    --as string

                    Username to impersonate for the operation. User could be a regular user or a service account in a namespace.

                    --as-group strings

                    Group to impersonate for the operation, this flag can be repeated to specify multiple groups.

                    --as-uid string

                    UID to impersonate for the operation.

                    --cache-dir string     Default: "$HOME/.kube/cache"

                    Default cache directory

                    --certificate-authority string

                    Path to a cert file for the certificate authority

                    --client-certificate string

                    Path to a client certificate file for TLS

                    --client-key string

                    Path to a client key file for TLS

                    --cluster string

                    The name of the kubeconfig cluster to use

                    --context string

                    The name of the kubeconfig context to use

                    --disable-compression

                    If true, opt-out of response compression for all requests to the server

                    --insecure-skip-tls-verify

                    If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure

                    --kubeconfig string

                    Path to the kubeconfig file to use for CLI requests.

                    --kuberc string

                    Path to the kuberc file to use for preferences. This can be disabled by exporting the KUBECTL_KUBERC=false environment variable or by turning the feature off with KUBERC=off.

                    --match-server-version

                    Require server version to match client version

                    -n, --namespace string

                    If present, the namespace scope for this CLI request

                    --password string

                    Password for basic authentication to the API server

                    --profile string     Default: "none"

                    Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)

                    --profile-output string     Default: "profile.pprof"

                    Name of the file to write the profile to

                    --request-timeout string     Default: "0"

                    The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.

                    -s, --server string

                    The address and port of the Kubernetes API server

                    --storage-driver-buffer-duration duration     Default: 1m0s

                    Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction

                    --storage-driver-db string     Default: "cadvisor"

                    database name

                    --storage-driver-host string     Default: "localhost:8086"

                    database host:port

                    --storage-driver-password string     Default: "root"

                    database password

                    --storage-driver-secure

                    use secure connection with database

                    --storage-driver-table string     Default: "stats"

                    table name

                    --storage-driver-user string     Default: "root"

                    database username

                    --tls-server-name string

                    Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used

                    --token string

                    Bearer token for authentication to the API server

                    --user string

                    The name of the kubeconfig user to use

                    --username string

                    Username for basic authentication to the API server

                    --version version[=true]

                    --version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version

                    --warnings-as-errors

                    Treat warnings received from the server as errors and exit with a non-zero exit code

                    See Also

                    This page is automatically generated.

                    If you plan to report an issue with this page, mention that the page is auto-generated in your issue description. The fix may need to happen elsewhere in the Kubernetes project.

                    Last modified September 04, 2025 at 3:30 PM PST: Update kubectl reference for v1.34 (bdc4bba2a5)

                    Developing and debugging services locally using telepresence

                    Kubernetes applications usually consist of multiple, separate services, each running in its own container. Developing and debugging these services on a remote Kubernetes cluster can be cumbersome, requiring you to get a shell on a running container in order to run debugging tools.

                    telepresence is a tool to ease the process of developing and debugging services locally while proxying the service to a remote Kubernetes cluster. Using telepresence allows you to use custom tools, such as a debugger and IDE, for a local service and provides the service full access to ConfigMaps, Secrets, and the services running on the remote cluster.

                    This document describes using telepresence to develop and debug services running on a remote cluster locally.

                    Before you begin

                    • Kubernetes cluster is installed
                    • kubectl is configured to communicate with the cluster
                    • Telepresence is installed

                    Connecting your local machine to a remote Kubernetes cluster

                    After installing telepresence, run telepresence connect to launch its Daemon and connect your local workstation to the cluster.

                    $ telepresence connect
                     
                    Launching Telepresence Daemon
                    ...
                    Connected to context default (https://<cluster public IP>)
                    

                    You can then curl services using the in-cluster DNS syntax, e.g. curl -ik https://kubernetes.default

                    Developing or debugging an existing service

                    When developing an application on Kubernetes, you typically program or debug a single service. The service might require access to other services for testing and debugging. One option is to use the continuous deployment pipeline, but even the fastest deployment pipeline introduces a delay in the program or debug cycle.

                    Use the telepresence intercept $SERVICE_NAME --port $LOCAL_PORT:$REMOTE_PORT command to create an "intercept" for rerouting remote service traffic.

                    Where:

                    • $SERVICE_NAME is the name of your local service
                    • $LOCAL_PORT is the port that your service is running on your local workstation
                    • And $REMOTE_PORT is the port your service listens to in the cluster

                    Running this command tells Telepresence to send remote traffic to your local service instead of the service in the remote Kubernetes cluster. Make edits to your service source code locally and save; the corresponding changes take effect immediately when you access your remote application. You can also run your local service using a debugger or any other local development tool.
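
For example, with a (hypothetical) service named dataprocessingservice that listens on port 80 in the cluster and a local copy listening on port 8080, the intercept would look like this (it requires a connected cluster, so it is shown for illustration only):

```shell
telepresence intercept dataprocessingservice --port 8080:80
```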

                    How does Telepresence work?

                    Telepresence installs a traffic-agent sidecar next to your existing application's container running in the remote cluster. It then captures all traffic requests going into the Pod and, instead of forwarding them to the application in the remote cluster, routes all of the traffic (when you create a global intercept) or a subset of the traffic (when you create a personal intercept) to your local development environment.

                    What's next

                    If you're interested in a hands-on tutorial, check out this tutorial that walks through locally developing the Guestbook application on Google Kubernetes Engine.

                    For further reading, visit the Telepresence website.

                    Items on this page refer to third party products or projects that provide functionality required by Kubernetes. The Kubernetes project authors aren't responsible for those third-party products or projects. See the CNCF website guidelines for more details.

                    You should read the content guide before proposing a change that adds an extra third-party link.

                    Last modified November 24, 2023 at 4:55 PM PST: Solves issue: #44034 (802dde6897)

                    Setup tools

                    Last modified November 04, 2022 at 11:37 AM PST: Updates page weights in reference docs section (98f310ab58)

                    Objects In Kubernetes

                    Kubernetes objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. Learn about the Kubernetes object model and how to work with these objects.

                    This page explains how Kubernetes objects are represented in the Kubernetes API, and how you can express them in .yaml format.

                    Understanding Kubernetes objects

                    Kubernetes objects are persistent entities in the Kubernetes system. Kubernetes uses these entities to represent the state of your cluster. Specifically, they can describe:

                    • What containerized applications are running (and on which nodes)
                    • The resources available to those applications
                    • The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance

A Kubernetes object is a "record of intent": once you create the object, the Kubernetes system will constantly work to ensure that the object exists. By creating an object, you're effectively telling the Kubernetes system what you want your cluster's workload to look like; this is your cluster's desired state.

                    To work with Kubernetes objects—whether to create, modify, or delete them—you'll need to use the Kubernetes API. When you use the kubectl command-line interface, for example, the CLI makes the necessary Kubernetes API calls for you. You can also use the Kubernetes API directly in your own programs using one of the Client Libraries.

                    Object spec and status

                    Almost every Kubernetes object includes two nested object fields that govern the object's configuration: the object spec and the object status. For objects that have a spec, you have to set this when you create the object, providing a description of the characteristics you want the resource to have: its desired state.

                    The status describes the current state of the object, supplied and updated by the Kubernetes system and its components. The Kubernetes control plane continually and actively manages every object's actual state to match the desired state you supplied.

For example: in Kubernetes, a Deployment is an object that can represent an application running on your cluster. When you create the Deployment, you might set the Deployment spec to specify that you want three replicas of the application to be running. The Kubernetes system reads the Deployment spec and starts three instances of your desired application, updating the status to match your spec. If any of those instances should fail (a status change), the Kubernetes system responds to the difference between spec and status by making a correction: in this case, starting a replacement instance.

                    For more information on the object spec, status, and metadata, see the Kubernetes API Conventions.

                    Describing a Kubernetes object

                    When you create an object in Kubernetes, you must provide the object spec that describes its desired state, as well as some basic information about the object (such as a name). When you use the Kubernetes API to create the object (either directly or via kubectl), that API request must include that information as JSON in the request body. Most often, you provide the information to kubectl in a file known as a manifest. By convention, manifests are YAML (you could also use JSON format). Tools such as kubectl convert the information from a manifest into JSON or another supported serialization format when making the API request over HTTP.

                    Here's an example manifest that shows the required fields and object spec for a Kubernetes Deployment:

                    apiVersion: apps/v1
                    kind: Deployment
                    metadata:
                      name: nginx-deployment
                    spec:
                      selector:
                        matchLabels:
                          app: nginx
                      replicas: 2 # tells deployment to run 2 pods matching the template
                      template:
                        metadata:
                          labels:
                            app: nginx
                        spec:
                          containers:
                          - name: nginx
                            image: nginx:1.14.2
                            ports:
                            - containerPort: 80
                    

                    One way to create a Deployment using a manifest file like the one above is to use the kubectl apply command in the kubectl command-line interface, passing the .yaml file as an argument. Here's an example:

                    kubectl apply -f https://k8s.io/examples/application/deployment.yaml
                    

                    The output is similar to this:

                    deployment.apps/nginx-deployment created
                    

                    Required fields

                    In the manifest (YAML or JSON file) for the Kubernetes object you want to create, you'll need to set values for the following fields:

                    • apiVersion - Which version of the Kubernetes API you're using to create this object
                    • kind - What kind of object you want to create
                    • metadata - Data that helps uniquely identify the object, including a name string, UID, and optional namespace
                    • spec - What state you desire for the object
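Putting these together, a minimal manifest might look like the following (the Pod name and image here are illustrative):

```yaml
# Minimal manifest showing only the required top-level fields.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod   # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.14.2
```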

                    The precise format of the object spec is different for every Kubernetes object, and contains nested fields specific to that object. The Kubernetes API Reference can help you find the spec format for all of the objects you can create using Kubernetes.

                    For example, see the spec field for the Pod API reference. For each Pod, the .spec field specifies the pod and its desired state (such as the container image name for each container within that pod). Another example of an object specification is the spec field for the StatefulSet API. For StatefulSet, the .spec field specifies the StatefulSet and its desired state. Within the .spec of a StatefulSet is a template for Pod objects. That template describes Pods that the StatefulSet controller will create in order to satisfy the StatefulSet specification. Different kinds of objects can also have different .status; again, the API reference pages detail the structure of that .status field, and its content for each different type of object.

                    Server side field validation

                    Starting with Kubernetes v1.25, the API server offers server side field validation that detects unrecognized or duplicate fields in an object. It provides all the functionality of kubectl --validate on the server side.

                    The kubectl tool uses the --validate flag to set the level of field validation. It accepts the values ignore, warn, and strict while also accepting the values true (equivalent to strict) and false (equivalent to ignore). The default validation setting for kubectl is --validate=true.

• Strict: strict field validation; errors on validation failure
• Warn: field validation is performed, but errors are exposed as warnings rather than failing the request
• Ignore: no server side field validation is performed
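For instance, to surface unrecognized fields as warnings instead of rejecting the request, you could apply the example manifest used earlier on this page with warn-level validation:

```shell
# Apply with warn-level server side field validation.
kubectl apply -f https://k8s.io/examples/application/deployment.yaml --validate=warn
```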

When kubectl cannot connect to an API server that supports field validation, it falls back to using client-side validation. Kubernetes 1.27 and later versions always offer field validation; older Kubernetes releases might not. If your cluster is older than v1.27, check the documentation for your version of Kubernetes.

                    What's next

                    If you're new to Kubernetes, read more about the following:

                    Kubernetes Object Management explains how to use kubectl to manage objects. You might need to install kubectl if you don't already have it available.

                    To learn about the Kubernetes API in general, visit:

                    To learn about objects in Kubernetes in more depth, read other pages in this section:

                    Last modified August 25, 2024 at 8:24 PM PST: Reorder overview pages (42da717f16)

                    The Kubernetes API

                    The Kubernetes API lets you query and manipulate the state of objects in Kubernetes. The core of Kubernetes' control plane is the API server and the HTTP API that it exposes. Users, the different parts of your cluster, and external components all communicate with one another through the API server.

                    The core of Kubernetes' control plane is the API server. The API server exposes an HTTP API that lets end users, different parts of your cluster, and external components communicate with one another.

                    The Kubernetes API lets you query and manipulate the state of API objects in Kubernetes (for example: Pods, Namespaces, ConfigMaps, and Events).

                    Most operations can be performed through the kubectl command-line interface or other command-line tools, such as kubeadm, which in turn use the API. However, you can also access the API directly using REST calls. Kubernetes provides a set of client libraries for those looking to write applications using the Kubernetes API.

                    Each Kubernetes cluster publishes the specification of the APIs that the cluster serves. There are two mechanisms that Kubernetes uses to publish these API specifications; both are useful to enable automatic interoperability. For example, the kubectl tool fetches and caches the API specification for enabling command-line completion and other features. The two supported mechanisms are as follows:

                    • The Discovery API provides information about the Kubernetes APIs: API names, resources, versions, and supported operations. This is a Kubernetes specific term as it is a separate API from the Kubernetes OpenAPI. It is intended to be a brief summary of the available resources and it does not detail specific schema for the resources. For reference about resource schemas, please refer to the OpenAPI document.

• The Kubernetes OpenAPI Document provides (full) OpenAPI v2.0 and 3.0 schemas for all Kubernetes API endpoints. OpenAPI v3 is the preferred method for accessing OpenAPI, as it provides a more comprehensive and accurate view of the API. It includes all the available API paths, as well as all resources consumed and produced for every operation on every endpoint. It also includes any extensibility components that a cluster supports. The data is a complete specification and is significantly larger than that from the Discovery API.

                    Discovery API

                    Kubernetes publishes a list of all group versions and resources supported via the Discovery API. This includes the following for each resource:

                    • Name
                    • Cluster or namespaced scope
                    • Endpoint URL and supported verbs
                    • Alternative names
                    • Group, version, kind

                    The API is available in both aggregated and unaggregated form. The aggregated discovery serves two endpoints, while the unaggregated discovery serves a separate endpoint for each group version.

                    Aggregated discovery

FEATURE STATE: Kubernetes v1.30 [stable] (enabled by default)

                    Kubernetes offers stable support for aggregated discovery, publishing all resources supported by a cluster through two endpoints (/api and /apis). Requesting this endpoint drastically reduces the number of requests sent to fetch the discovery data from the cluster. You can access the data by requesting the respective endpoints with an Accept header indicating the aggregated discovery resource: Accept: application/json;v=v2;g=apidiscovery.k8s.io;as=APIGroupDiscoveryList.

                    Without indicating the resource type using the Accept header, the default response for the /api and /apis endpoint is an unaggregated discovery document.

The discovery document for the built-in resources can be found in the Kubernetes GitHub repository. This GitHub document can be used as a reference for the base set of available resources if a Kubernetes cluster is not available to query.

                    The endpoint also supports ETag and protobuf encoding.
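One way to try this is to run a local proxy to the API server and request the aggregated document with the Accept header shown above (this assumes kubectl is configured against a reachable cluster):

```shell
# Start a local proxy to the API server, then request aggregated discovery.
kubectl proxy --port=8001 &
curl -H 'Accept: application/json;v=v2;g=apidiscovery.k8s.io;as=APIGroupDiscoveryList' \
  http://localhost:8001/apis
```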

                    Unaggregated discovery

                    Without discovery aggregation, discovery is published in levels, with the root endpoints publishing discovery information for downstream documents.

                    A list of all group versions supported by a cluster is published at the /api and /apis endpoints. Example:

                    {
                      "kind": "APIGroupList",
                      "apiVersion": "v1",
                      "groups": [
                        {
                          "name": "apiregistration.k8s.io",
                          "versions": [
                            {
                              "groupVersion": "apiregistration.k8s.io/v1",
                              "version": "v1"
                            }
                          ],
                          "preferredVersion": {
                            "groupVersion": "apiregistration.k8s.io/v1",
                            "version": "v1"
                          }
                        },
                        {
                          "name": "apps",
                          "versions": [
                            {
                              "groupVersion": "apps/v1",
                              "version": "v1"
                            }
                          ],
                          "preferredVersion": {
                            "groupVersion": "apps/v1",
                            "version": "v1"
                          }
                        },
                        ...
                    }
                    

                    Additional requests are needed to obtain the discovery document for each group version at /apis/<group>/<version> (for example: /apis/rbac.authorization.k8s.io/v1alpha1), which advertises the list of resources served under a particular group version. These endpoints are used by kubectl to fetch the list of resources supported by a cluster.
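For example, assuming access to a cluster, you could fetch the discovery document for a single group version like this:

```shell
# Fetch the discovery document for the apps/v1 group version.
kubectl get --raw /apis/apps/v1
```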

                    OpenAPI interface definition

                    For details about the OpenAPI specifications, see the OpenAPI documentation.

                    Kubernetes serves both OpenAPI v2.0 and OpenAPI v3.0. OpenAPI v3 is the preferred method of accessing the OpenAPI because it offers a more comprehensive (lossless) representation of Kubernetes resources. Due to limitations of OpenAPI version 2, certain fields are dropped from the published OpenAPI including but not limited to default, nullable, oneOf.

                    OpenAPI V2

                    The Kubernetes API server serves an aggregated OpenAPI v2 spec via the /openapi/v2 endpoint. You can request the response format using request headers as follows:

Valid request header values for OpenAPI v2 queries:

• Accept-Encoding: gzip (not supplying this header is also acceptable)
• Accept: application/com.github.proto-openapi.spec.v2@v1.0+protobuf (mainly for intra-cluster use), application/json (the default), or * (serves application/json)
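Assuming access to a cluster, you can fetch the aggregated v2 spec directly (kubectl requests JSON by default):

```shell
# Fetch the aggregated OpenAPI v2 spec from the API server.
kubectl get --raw /openapi/v2
```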

                    OpenAPI V3

FEATURE STATE: Kubernetes v1.27 [stable] (enabled by default)

                    Kubernetes supports publishing a description of its APIs as OpenAPI v3.

                    A discovery endpoint /openapi/v3 is provided to see a list of all group/versions available. This endpoint only returns JSON. These group/versions are provided in the following format:

                    {
                        "paths": {
                            ...,
                            "api/v1": {
                                "serverRelativeURL": "/openapi/v3/api/v1?hash=CC0E9BFD992D8C59AEC98A1E2336F899E8318D3CF4C68944C3DEC640AF5AB52D864AC50DAA8D145B3494F75FA3CFF939FCBDDA431DAD3CA79738B297795818CF"
                            },
                            "apis/admissionregistration.k8s.io/v1": {
                                "serverRelativeURL": "/openapi/v3/apis/admissionregistration.k8s.io/v1?hash=E19CC93A116982CE5422FC42B590A8AFAD92CDE9AE4D59B5CAAD568F083AD07946E6CB5817531680BCE6E215C16973CD39003B0425F3477CFD854E89A9DB6597"
                            },
                            ....
                        }
                    }
                    

The relative URLs point to immutable OpenAPI descriptions, in order to improve client-side caching. The API server also sets the appropriate HTTP caching headers for that purpose (Expires to 1 year in the future, and Cache-Control to immutable). When an obsolete URL is used, the API server returns a redirect to the newest URL.

                    The Kubernetes API server publishes an OpenAPI v3 spec per Kubernetes group version at the /openapi/v3/apis/<group>/<version>?hash=<hash> endpoint.

                    Refer to the table below for accepted request headers.

Valid request header values for OpenAPI v3 queries:

• Accept-Encoding: gzip (not supplying this header is also acceptable)
• Accept: application/com.github.proto-openapi.spec.v3@v1.0+protobuf (mainly for intra-cluster use), application/json (the default), or * (serves application/json)

                    A Golang implementation to fetch the OpenAPI V3 is provided in the package k8s.io/client-go/openapi3.
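As a sketch, assuming access to a cluster, you could list the available group/versions and then fetch one spec; requesting the path without the ?hash= query parameter should also work, since the hash exists to support client-side caching:

```shell
# List available group/versions, then fetch the OpenAPI v3 spec for one of them.
kubectl get --raw /openapi/v3
kubectl get --raw /openapi/v3/apis/apps/v1
```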

                    Kubernetes 1.34 publishes OpenAPI v2.0 and v3.0; there are no plans to support 3.1 in the near future.

                    Protobuf serialization

                    Kubernetes implements an alternative Protobuf based serialization format that is primarily intended for intra-cluster communication. For more information about this format, see the Kubernetes Protobuf serialization design proposal and the Interface Definition Language (IDL) files for each schema located in the Go packages that define the API objects.

                    Persistence

                    Kubernetes stores the serialized state of objects by writing them into etcd.

                    API groups and versioning

                    To make it easier to eliminate fields or restructure resource representations, Kubernetes supports multiple API versions, each at a different API path, such as /api/v1 or /apis/rbac.authorization.k8s.io/v1alpha1.

                    Versioning is done at the API level rather than at the resource or field level to ensure that the API presents a clear, consistent view of system resources and behavior, and to enable controlling access to end-of-life and/or experimental APIs.

                    To make it easier to evolve and to extend its API, Kubernetes implements API groups that can be enabled or disabled.

                    API resources are distinguished by their API group, resource type, namespace (for namespaced resources), and name. The API server handles the conversion between API versions transparently: all the different versions are actually representations of the same persisted data. The API server may serve the same underlying data through multiple API versions.

                    For example, suppose there are two API versions, v1 and v1beta1, for the same resource. If you originally created an object using the v1beta1 version of its API, you can later read, update, or delete that object using either the v1beta1 or the v1 API version, until the v1beta1 version is deprecated and removed. At that point you can continue accessing and modifying the object using the v1 API.
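kubectl lets you pin a specific version explicitly using the fully-qualified resource.version.group form. For example, assuming a Deployment exists in the default namespace:

```shell
# Read Deployments through an explicitly named group and version.
kubectl get deployments.v1.apps --namespace default
```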

                    API changes

Any successful system needs to grow and change as new use cases emerge or existing ones change. Therefore, the Kubernetes API is designed to continuously change and grow. The Kubernetes project aims not to break compatibility with existing clients, and to maintain that compatibility for a length of time so that other projects have an opportunity to adapt.

                    In general, new API resources and new resource fields can be added often and frequently. Elimination of resources or fields requires following the API deprecation policy.

                    Kubernetes makes a strong commitment to maintain compatibility for official Kubernetes APIs once they reach general availability (GA), typically at API version v1. Additionally, Kubernetes maintains compatibility with data persisted via beta API versions of official Kubernetes APIs, and ensures that data can be converted and accessed via GA API versions when the feature goes stable.

                    If you adopt a beta API version, you will need to transition to a subsequent beta or stable API version once the API graduates. The best time to do this is while the beta API is in its deprecation period, since objects are simultaneously accessible via both API versions. Once the beta API completes its deprecation period and is no longer served, the replacement API version must be used.

                    Refer to API versions reference for more details on the API version level definitions.

                    API Extension

                    The Kubernetes API can be extended in one of two ways:

                    1. Custom resources let you declaratively define how the API server should provide your chosen resource API.
                    2. You can also extend the Kubernetes API by implementing an aggregation layer.

                    What's next

                    Last modified January 08, 2025 at 10:50 AM PST: Fix feature gate name conflicts (2/2) (3782732ce4)

                    SubjectAccessReview

                    SubjectAccessReview checks whether or not a user or group can perform an action.

                    apiVersion: authorization.k8s.io/v1

                    import "k8s.io/api/authorization/v1"

                    SubjectAccessReview

                    SubjectAccessReview checks whether or not a user or group can perform an action.
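As a sketch, a SubjectAccessReview asking whether a hypothetical user jane (a member of a hypothetical developers group) can get Deployments in the default namespace could look like this:

```yaml
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: jane            # hypothetical user name
  groups:
  - developers          # hypothetical group name
  resourceAttributes:
    group: apps
    resource: deployments
    verb: get
    namespace: default
```

You could submit this with kubectl create -f and read the answer from the returned status field; SubjectAccessReview objects are evaluated on submission rather than persisted.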


                    SubjectAccessReviewSpec

SubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set.


                    • extra (map[string][]string)

                      Extra corresponds to the user.Info.GetExtra() method from the authenticator. Since that is input to the authorizer it needs a reflection here.

                    • groups ([]string)

                      Atomic: will be replaced during a merge

                      Groups is the groups you're testing for.

                    • nonResourceAttributes (NonResourceAttributes)

                      NonResourceAttributes describes information for a non-resource access request

                      NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface

                      • nonResourceAttributes.path (string)

                        Path is the URL path of the request

                      • nonResourceAttributes.verb (string)

                        Verb is the standard HTTP verb

                    • resourceAttributes (ResourceAttributes)

                      ResourceAuthorizationAttributes describes information for a resource access request

                      ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface

                      • resourceAttributes.fieldSelector (FieldSelectorAttributes)

                        fieldSelector describes the limitation on access based on field. It can only limit access, not broaden it.

FieldSelectorAttributes indicates field-limited access. Webhook authors are encouraged to:

• ensure rawSelector and requirements are not both set
• consider the requirements field if set
• not try to parse or consider the rawSelector field if set

This is to avoid another CVE-2022-2880 (i.e. getting different systems to agree on how exactly to parse a query is not something we want); see https://www.oxeye.io/resources/golang-parameter-smuggling-attack for more details. For the SubjectAccessReview endpoints of the kube-apiserver:

• If rawSelector is empty and requirements are empty, the request is not limited.
• If rawSelector is present and requirements are empty, the rawSelector will be parsed and limited if the parsing succeeds.
• If rawSelector is empty and requirements are present, the requirements should be honored.
• If rawSelector is present and requirements are present, the request is invalid.

                        • resourceAttributes.fieldSelector.rawSelector (string)

                          rawSelector is the serialization of a field selector that would be included in a query parameter. Webhook implementations are encouraged to ignore rawSelector. The kube-apiserver's *SubjectAccessReview will parse the rawSelector as long as the requirements are not present.

                        • resourceAttributes.fieldSelector.requirements ([]FieldSelectorRequirement)

                          Atomic: will be replaced during a merge

                          requirements is the parsed interpretation of a field selector. All requirements must be met for a resource instance to match the selector. Webhook implementations should handle requirements, but how to handle them is up to the webhook. Since requirements can only limit the request, it is safe to authorize as unlimited request if the requirements are not understood.

                          FieldSelectorRequirement is a selector that contains values, a key, and an operator that relates the key and values.

                          • resourceAttributes.fieldSelector.requirements.key (string), required

                            key is the field selector key that the requirement applies to.

                          • resourceAttributes.fieldSelector.requirements.operator (string), required

                            operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. The list of operators may grow in the future.

                          • resourceAttributes.fieldSelector.requirements.values ([]string)

                            Atomic: will be replaced during a merge

                            values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty.

                      • resourceAttributes.group (string)

                        Group is the API Group of the Resource. "*" means all.

                      • resourceAttributes.labelSelector (LabelSelectorAttributes)

                        labelSelector describes the limitation on access based on labels. It can only limit access, not broaden it.

LabelSelectorAttributes indicates label-limited access. Webhook authors are encouraged to:

• ensure rawSelector and requirements are not both set
• consider the requirements field if set
• not try to parse or consider the rawSelector field if set

This is to avoid another CVE-2022-2880 (i.e. getting different systems to agree on how exactly to parse a query is not something we want); see https://www.oxeye.io/resources/golang-parameter-smuggling-attack for more details. For the SubjectAccessReview endpoints of the kube-apiserver:

• If rawSelector is empty and requirements are empty, the request is not limited.
• If rawSelector is present and requirements are empty, the rawSelector will be parsed and limited if the parsing succeeds.
• If rawSelector is empty and requirements are present, the requirements should be honored.
• If rawSelector is present and requirements are present, the request is invalid.

                        • resourceAttributes.labelSelector.rawSelector (string)

                          rawSelector is the serialization of a field selector that would be included in a query parameter. Webhook implementations are encouraged to ignore rawSelector. The kube-apiserver's *SubjectAccessReview will parse the rawSelector as long as the requirements are not present.

                        • resourceAttributes.labelSelector.requirements ([]LabelSelectorRequirement)

                          Atomic: will be replaced during a merge

                          requirements is the parsed interpretation of a label selector. All requirements must be met for a resource instance to match the selector. Webhook implementations should handle requirements, but how to handle them is up to the webhook. Since requirements can only limit the request, it is safe to authorize as unlimited request if the requirements are not understood.

                          A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

                          • resourceAttributes.labelSelector.requirements.key (string), required

                            key is the label key that the selector applies to.

                          • resourceAttributes.labelSelector.requirements.operator (string), required

                            operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.

                          • resourceAttributes.labelSelector.requirements.values ([]string)

                            Atomic: will be replaced during a merge

                            values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.

                      • resourceAttributes.name (string)

                        Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all.

                      • resourceAttributes.namespace (string)

Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces. However:

• "" (empty) is defaulted for LocalSubjectAccessReviews
• "" (empty) is empty for cluster-scoped resources
• "" (empty) means "all" for namespace-scoped resources from a SubjectAccessReview or SelfSubjectAccessReview

                      • resourceAttributes.resource (string)

                        Resource is one of the existing resource types. "*" means all.

                      • resourceAttributes.subresource (string)

                        Subresource is one of the existing resource types. "" means none.

                      • resourceAttributes.verb (string)

                        Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all.

                      • resourceAttributes.version (string)

                        Version is the API Version of the Resource. "*" means all.

                    • uid (string)

                      UID information about the requesting user.

                    • user (string)

                      User is the user you're testing for. If you specify "User" but not "Groups", then is it interpreted as "What if User were not a member of any groups

                    SubjectAccessReviewStatus

                    SubjectAccessReviewStatus


                    • allowed (boolean), required

                      Allowed is required. True if the action would be allowed, false otherwise.

                    • denied (boolean)

                      Denied is optional. True if the action would be denied, otherwise false. If both allowed is false and denied is false, then the authorizer has no opinion on whether to authorize the action. Denied may not be true if Allowed is true.

                    • evaluationError (string)

                      EvaluationError is an indication that some error occurred during the authorization check. It is entirely possible to get an error and be able to continue determine authorization status in spite of it. For instance, RBAC can be missing a role, but enough roles are still present and bound to reason about the request.

                    • reason (string)

                      Reason is optional. It indicates why a request was allowed or denied.

                    Operations


                    create create a SubjectAccessReview

                    HTTP Request

                    POST /apis/authorization.k8s.io/v1/subjectaccessreviews

                    Parameters

                    Response

                    200 (SubjectAccessReview): OK

                    201 (SubjectAccessReview): Created

                    202 (SubjectAccessReview): Accepted

                    401: Unauthorized

                    This page is automatically generated.

                    If you plan to report an issue with this page, mention that the page is auto-generated in your issue description. The fix may need to happen elsewhere in the Kubernetes project.

                    Last modified September 04, 2025 at 3:37 PM PST: Update API resource reference for v1.34 (3e10e8c195)
                    ReplicationController | Kubernetes

                    ReplicationController

                    Legacy API for managing workloads that can scale horizontally. Superseded by the Deployment and ReplicaSet APIs.

                    A ReplicationController ensures that a specified number of pod replicas are running at any one time. In other words, a ReplicationController makes sure that a pod or a homogeneous set of pods is always up and available.

                    How a ReplicationController works

                    If there are too many pods, the ReplicationController terminates the extra pods. If there are too few, the ReplicationController starts more pods. Unlike manually created pods, the pods maintained by a ReplicationController are automatically replaced if they fail, are deleted, or are terminated. For example, your pods are re-created on a node after disruptive maintenance such as a kernel upgrade. For this reason, you should use a ReplicationController even if your application requires only a single pod. A ReplicationController is similar to a process supervisor, but instead of supervising individual processes on a single node, the ReplicationController supervises multiple pods across multiple nodes.

                    ReplicationController is often abbreviated to "rc" in discussion, and as a shortcut in kubectl commands.

                    A simple case is to create one ReplicationController object to reliably run one instance of a Pod indefinitely. A more complex use case is to run several identical replicas of a replicated service, such as web servers.

                    Running an example ReplicationController

                    This example ReplicationController config runs three copies of the nginx web server.

                    apiVersion: v1
                    kind: ReplicationController
                    metadata:
                      name: nginx
                    spec:
                      replicas: 3
                      selector:
                        app: nginx
                      template:
                        metadata:
                          name: nginx
                          labels:
                            app: nginx
                        spec:
                          containers:
                          - name: nginx
                            image: nginx
                            ports:
                            - containerPort: 80
                    

                    Run the example job by downloading the example file and then running this command:

                    kubectl apply -f https://k8s.io/examples/controllers/replication.yaml
                    

                    The output is similar to this:

                    replicationcontroller/nginx created
                    

                    Check on the status of the ReplicationController using this command:

                    kubectl describe replicationcontrollers/nginx
                    

                    The output is similar to this:

                    Name:        nginx
                    Namespace:   default
                    Selector:    app=nginx
                    Labels:      app=nginx
                    Annotations:    <none>
                    Replicas:    3 current / 3 desired
                    Pods Status: 0 Running / 3 Waiting / 0 Succeeded / 0 Failed
                    Pod Template:
                      Labels:       app=nginx
                      Containers:
                       nginx:
                        Image:              nginx
                        Port:               80/TCP
                        Environment:        <none>
                        Mounts:             <none>
                      Volumes:              <none>
                    Events:
                      FirstSeen       LastSeen     Count    From                        SubobjectPath    Type      Reason              Message
                      ---------       --------     -----    ----                        -------------    ----      ------              -------
                      20s             20s          1        {replication-controller }                    Normal    SuccessfulCreate    Created pod: nginx-qrm3m
                      20s             20s          1        {replication-controller }                    Normal    SuccessfulCreate    Created pod: nginx-3ntk0
                      20s             20s          1        {replication-controller }                    Normal    SuccessfulCreate    Created pod: nginx-4ok8v
                    

                    Here, three pods are created, but none is running yet, perhaps because the image is being pulled. A little later, the same command may show:

                    Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
                    

                    To list all the pods that belong to the ReplicationController in a machine readable form, you can use a command like this:

                    pods=$(kubectl get pods --selector=app=nginx --output=jsonpath={.items..metadata.name})
                    echo $pods
                    

                    The output is similar to this:

                    nginx-3ntk0 nginx-4ok8v nginx-qrm3m
                    

                    Here, the selector is the same as the selector for the ReplicationController (seen in the kubectl describe output), and in a different form in replication.yaml. The --output=jsonpath option specifies an expression with the name from each pod in the returned list.

                    Writing a ReplicationController Manifest

                    As with all other Kubernetes config, a ReplicationController needs apiVersion, kind, and metadata fields.

                    When the control plane creates new Pods for a ReplicationController, the .metadata.name of the ReplicationController is part of the basis for naming those Pods. The name of a ReplicationController must be a valid DNS subdomain value, but this can produce unexpected results for the Pod hostnames. For best compatibility, the name should follow the more restrictive rules for a DNS label.

                    For general information about working with configuration files, see object management.

                    A ReplicationController also needs a .spec section.

                    Pod Template

                    The .spec.template is the only required field of the .spec.

                    The .spec.template is a pod template. It has exactly the same schema as a Pod, except it is nested and does not have an apiVersion or kind.

                    In addition to required fields for a Pod, a pod template in a ReplicationController must specify appropriate labels and an appropriate restart policy. For labels, make sure not to overlap with other controllers. See pod selector.

                    Only a .spec.template.spec.restartPolicy equal to Always is allowed, which is the default if not specified.

                    For local container restarts, ReplicationControllers delegate to an agent on the node, for example the Kubelet.

                    Labels on the ReplicationController

                    The ReplicationController can itself have labels (.metadata.labels). Typically, you would set these the same as the .spec.template.metadata.labels; if .metadata.labels is not specified then it defaults to .spec.template.metadata.labels. However, they are allowed to be different, and the .metadata.labels do not affect the behavior of the ReplicationController.

                    Pod Selector

                    The .spec.selector field is a label selector. A ReplicationController manages all the pods with labels that match the selector. It does not distinguish between pods that it created or deleted and pods that another person or process created or deleted. This allows the ReplicationController to be replaced without affecting the running pods.

                    If specified, the .spec.template.metadata.labels must be equal to the .spec.selector, or it will be rejected by the API. If .spec.selector is unspecified, it will be defaulted to .spec.template.metadata.labels.

                    Also you should not normally create any pods whose labels match this selector, either directly, with another ReplicationController, or with another controller such as Job. If you do so, the ReplicationController thinks that it created the other pods. Kubernetes does not stop you from doing this.

                    If you do end up with multiple controllers that have overlapping selectors, you will have to manage the deletion yourself (see below).

                    Multiple Replicas

                    You can specify how many pods should run concurrently by setting .spec.replicas to the number of pods you would like to have running concurrently. The number running at any time may be higher or lower, such as if the replicas were just increased or decreased, or if a pod is gracefully shutdown, and a replacement starts early.

                    If you do not specify .spec.replicas, then it defaults to 1.

                    Working with ReplicationControllers

                    Deleting a ReplicationController and its Pods

                    To delete a ReplicationController and all its pods, use kubectl delete. Kubectl will scale the ReplicationController to zero and wait for it to delete each pod before deleting the ReplicationController itself. If this kubectl command is interrupted, it can be restarted.

                    When using the REST API or client library, you need to do the steps explicitly (scale replicas to 0, wait for pod deletions, then delete the ReplicationController).

                    Deleting only a ReplicationController

                    You can delete a ReplicationController without affecting any of its pods.

                    Using kubectl, specify the --cascade=orphan option to kubectl delete.

                    When using the REST API or client library, you can delete the ReplicationController object.

                    Once the original is deleted, you can create a new ReplicationController to replace it. As long as the old and new .spec.selector are the same, then the new one will adopt the old pods. However, it will not make any effort to make existing pods match a new, different pod template. To update pods to a new spec in a controlled way, use a rolling update.

                    Isolating pods from a ReplicationController

                    Pods may be removed from a ReplicationController's target set by changing their labels. This technique may be used to remove pods from service for debugging and data recovery. Pods that are removed in this way will be replaced automatically (assuming that the number of replicas is not also changed).

                    Common usage patterns

                    Rescheduling

                    As mentioned above, whether you have 1 pod you want to keep running, or 1000, a ReplicationController will ensure that the specified number of pods exists, even in the event of node failure or pod termination (for example, due to an action by another control agent).

                    Scaling

                    The ReplicationController enables scaling the number of replicas up or down, either manually or by an auto-scaling control agent, by updating the replicas field.

                    Rolling updates

                    The ReplicationController is designed to facilitate rolling updates to a service by replacing pods one-by-one.

                    As explained in #1353, the recommended approach is to create a new ReplicationController with 1 replica, scale the new (+1) and old (-1) controllers one by one, and then delete the old controller after it reaches 0 replicas. This predictably updates the set of pods regardless of unexpected failures.

                    Ideally, the rolling update controller would take application readiness into account, and would ensure that a sufficient number of pods were productively serving at any given time.

                    The two ReplicationControllers would need to create pods with at least one differentiating label, such as the image tag of the primary container of the pod, since it is typically image updates that motivate rolling updates.

                    Multiple release tracks

                    In addition to running multiple releases of an application while a rolling update is in progress, it's common to run multiple releases for an extended period of time, or even continuously, using multiple release tracks. The tracks would be differentiated by labels.

                    For instance, a service might target all pods with tier in (frontend), environment in (prod). Now say you have 10 replicated pods that make up this tier. But you want to be able to 'canary' a new version of this component. You could set up a ReplicationController with replicas set to 9 for the bulk of the replicas, with labels tier=frontend, environment=prod, track=stable, and another ReplicationController with replicas set to 1 for the canary, with labels tier=frontend, environment=prod, track=canary. Now the service is covering both the canary and non-canary pods. But you can mess with the ReplicationControllers separately to test things out, monitor the results, etc.

                    Using ReplicationControllers with Services

                    Multiple ReplicationControllers can sit behind a single service, so that, for example, some traffic goes to the old version, and some goes to the new version.

                    A ReplicationController will never terminate on its own, but it isn't expected to be as long-lived as services. Services may be composed of pods controlled by multiple ReplicationControllers, and it is expected that many ReplicationControllers may be created and destroyed over the lifetime of a service (for instance, to perform an update of pods that run the service). Both services themselves and their clients should remain oblivious to the ReplicationControllers that maintain the pods of the services.

                    Writing programs for Replication

                    Pods created by a ReplicationController are intended to be fungible and semantically identical, though their configurations may become heterogeneous over time. This is an obvious fit for replicated stateless servers, but ReplicationControllers can also be used to maintain availability of master-elected, sharded, and worker-pool applications. Such applications should use dynamic work assignment mechanisms, such as the RabbitMQ work queues, as opposed to static/one-time customization of the configuration of each pod, which is considered an anti-pattern. Any pod customization performed, such as vertical auto-sizing of resources (for example, cpu or memory), should be performed by another online controller process, not unlike the ReplicationController itself.

                    Responsibilities of the ReplicationController

                    The ReplicationController ensures that the desired number of pods matches its label selector and are operational. Currently, only terminated pods are excluded from its count. In the future, readiness and other information available from the system may be taken into account, we may add more controls over the replacement policy, and we plan to emit events that could be used by external clients to implement arbitrarily sophisticated replacement and/or scale-down policies.

                    The ReplicationController is forever constrained to this narrow responsibility. It itself will not perform readiness nor liveness probes. Rather than performing auto-scaling, it is intended to be controlled by an external auto-scaler (as discussed in #492), which would change its replicas field. We will not add scheduling policies (for example, spreading) to the ReplicationController. Nor should it verify that the pods controlled match the currently specified template, as that would obstruct auto-sizing and other automated processes. Similarly, completion deadlines, ordering dependencies, configuration expansion, and other features belong elsewhere. We even plan to factor out the mechanism for bulk pod creation (#170).

                    The ReplicationController is intended to be a composable building-block primitive. We expect higher-level APIs and/or tools to be built on top of it and other complementary primitives for user convenience in the future. The "macro" operations currently supported by kubectl (run, scale) are proof-of-concept examples of this. For instance, we could imagine something like Asgard managing ReplicationControllers, auto-scalers, services, scheduling policies, canaries, etc.

                    API Object

                    Replication controller is a top-level resource in the Kubernetes REST API. More details about the API object can be found at: ReplicationController API object.

                    Alternatives to ReplicationController

                    ReplicaSet

                    ReplicaSet is the next-generation ReplicationController that supports the new set-based label selector. It's mainly used by Deployment as a mechanism to orchestrate pod creation, deletion and updates. Note that we recommend using Deployments instead of directly using Replica Sets, unless you require custom update orchestration or don't require updates at all.

                    Deployment is a higher-level API object that updates its underlying Replica Sets and their Pods. Deployments are recommended if you want the rolling update functionality, because they are declarative, server-side, and have additional features.

                    Bare Pods

                    Unlike in the case where a user directly created pods, a ReplicationController replaces pods that are deleted or terminated for any reason, such as in the case of node failure or disruptive node maintenance, such as a kernel upgrade. For this reason, we recommend that you use a ReplicationController even if your application requires only a single pod. Think of it similarly to a process supervisor, only it supervises multiple pods across multiple nodes instead of individual processes on a single node. A ReplicationController delegates local container restarts to some agent on the node, such as the kubelet.

                    Job

                    Use a Job instead of a ReplicationController for pods that are expected to terminate on their own (that is, batch jobs).

                    DaemonSet

                    Use a DaemonSet instead of a ReplicationController for pods that provide a machine-level function, such as machine monitoring or machine logging. These pods have a lifetime that is tied to a machine lifetime: the pod needs to be running on the machine before other pods start, and are safe to terminate when the machine is otherwise ready to be rebooted/shutdown.

                    What's next

                    • Learn about Pods.
                    • Learn about Deployment, the replacement for ReplicationController.
                    • ReplicationController is part of the Kubernetes REST API. Read the ReplicationController object definition to understand the API for replication controllers.
                    Last modified March 14, 2024 at 2:28 PM PST: Add metadata to use mechanism for API reference links (c889d9b251)
                    kubectl set serviceaccount | Kubernetes

                    kubectl set serviceaccount

                    Synopsis

                    Update the service account of pod template resources.

                    Possible resources (case insensitive) can be:

                    replicationcontroller (rc), deployment (deploy), daemonset (ds), job, replicaset (rs), statefulset

                    kubectl set serviceaccount (-f FILENAME | TYPE NAME) SERVICE_ACCOUNT
                    

                    Examples

                      # Set deployment nginx-deployment's service account to serviceaccount1
                      kubectl set serviceaccount deployment nginx-deployment serviceaccount1
                      
                      # Print the result (in YAML format) of updated nginx deployment with the service account from local file, without hitting the API server
                      kubectl set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run=client -o yaml
                    

                    Options

                    --all

                    Select all resources, in the namespace of the specified resource types

                    --allow-missing-template-keys     Default: true

                    If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.

                    --dry-run string[="unchanged"]     Default: "none"

                    Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource.

                    --field-manager string     Default: "kubectl-set"

                    Name of the manager used to track field ownership.

                    -f, --filename strings

                    Filename, directory, or URL to files identifying the resource to get from a server.

                    -h, --help

                    help for serviceaccount

                    -k, --kustomize string

                    Process the kustomization directory. This flag can't be used together with -f or -R.

                    --local

                    If true, set serviceaccount will NOT contact api-server but run locally.

                    -o, --output string

                    Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).

                    -R, --recursive

                    Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.

                    --show-managed-fields

                    If true, keep the managedFields when printing objects in JSON or YAML format.

                    --template string

                    Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].

                    Parent Options Inherited

                    --as string

                    Username to impersonate for the operation. User could be a regular user or a service account in a namespace.

                    --as-group strings

                    Group to impersonate for the operation, this flag can be repeated to specify multiple groups.

                    --as-uid string

                    UID to impersonate for the operation.

                    --cache-dir string     Default: "$HOME/.kube/cache"

                    Default cache directory

                    --certificate-authority string

                    Path to a cert file for the certificate authority

                    --client-certificate string

                    Path to a client certificate file for TLS

                    --client-key string

                    Path to a client key file for TLS

                    --cluster string

                    The name of the kubeconfig cluster to use

                    --context string

                    The name of the kubeconfig context to use

                    --disable-compression

                    If true, opt-out of response compression for all requests to the server

                    --insecure-skip-tls-verify

                    If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure

                    --kubeconfig string

                    Path to the kubeconfig file to use for CLI requests.

                    --kuberc string

                    Path to the kuberc file to use for preferences. This can be disabled by exporting KUBECTL_KUBERC=false feature gate or turning off the feature KUBERC=off.

                    --match-server-version

                    Require server version to match client version

                    -n, --namespace string

                    If present, the namespace scope for this CLI request

                    --password string

                    Password for basic authentication to the API server

                    --profile string     Default: "none"

                    Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)

                    --profile-output string     Default: "profile.pprof"

                    Name of the file to write the profile to

                    --request-timeout string     Default: "0"

                    The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.

                    -s, --server string

                    The address and port of the Kubernetes API server

                    --storage-driver-buffer-duration duration     Default: 1m0s

                    Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction

                    --storage-driver-db string     Default: "cadvisor"

                    database name

                    --storage-driver-host string     Default: "localhost:8086"

                    database host:port

                    --storage-driver-password string     Default: "root"

                    database password

                    --storage-driver-secure

                    use secure connection with database

                    --storage-driver-table string     Default: "stats"

                    table name

                    --storage-driver-user string     Default: "root"

                    database username

                    --tls-server-name string

                    Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used

                    --token string

                    Bearer token for authentication to the API server

                    --user string

                    The name of the kubeconfig user to use

                    --username string

                    Username for basic authentication to the API server

                    --version version[=true]

                    --version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version

                    --warnings-as-errors

                    Treat warnings received from the server as errors and exit with a non-zero exit code

                    See Also

                    This page is automatically generated.

                    If you plan to report an issue with this page, mention that the page is auto-generated in your issue description. The fix may need to happen elsewhere in the Kubernetes project.

                    Last modified September 04, 2025 at 3:30 PM PST: Update kubectl reference for v1.34 (bdc4bba2a5)
                    kubectl wait | Kubernetes

                    kubectl wait

                    Synopsis

                    Experimental: Wait for a specific condition on one or many resources.

                    The command takes multiple resources and waits until the specified condition is seen in the Status field of every given resource.

                    Alternatively, the command can wait for the given set of resources to be created or deleted by providing the "create" or "delete" keyword as the value to the --for flag.

                    A successful message will be printed to stdout indicating when the specified condition has been met. You can use -o option to change to output destination.

                    kubectl wait ([-f FILENAME] | resource.group/resource.name | resource.group [(-l label | --all)]) [--for=create|--for=delete|--for condition=available|--for=jsonpath='{}'[=value]]
                    

                    Examples

                      # Wait for the pod "busybox1" to contain the status condition of type "Ready"
                      kubectl wait --for=condition=Ready pod/busybox1
                      
                      # The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity)
                      kubectl wait --for=condition=Ready=false pod/busybox1
                      
                      # Wait for the pod "busybox1" to contain the status phase to be "Running"
                      kubectl wait --for=jsonpath='{.status.phase}'=Running pod/busybox1
                      
                      # Wait for pod "busybox1" to be Ready
                      kubectl wait --for='jsonpath={.status.conditions[?(@.type=="Ready")].status}=True' pod/busybox1
                      
                      # Wait for the service "loadbalancer" to have ingress
                      kubectl wait --for=jsonpath='{.status.loadBalancer.ingress}' service/loadbalancer
                      
                      # Wait for the secret "busybox1" to be created, with a timeout of 30s
                      kubectl create secret generic busybox1
                      kubectl wait --for=create secret/busybox1 --timeout=30s
                      
                      # Wait for the pod "busybox1" to be deleted, with a timeout of 60s, after having issued the "delete" command
                      kubectl delete pod/busybox1
                      kubectl wait --for=delete pod/busybox1 --timeout=60s
                    

                    Options

                    --all

                    Select all resources in the namespace of the specified resource types

                    -A, --all-namespaces

                    If present, list the requested object(s) across all namespaces. Namespace in current context is ignored even if specified with --namespace.

                    --allow-missing-template-keys     Default: true

                    If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats.

                    --field-selector string

Selector (field query) to filter on, supports '=', '==', and '!=' (e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type.

                    -f, --filename strings

Filename, directory, or URL to files identifying the resource.

                    --for string

                    The condition to wait on: [create|delete|condition=condition-name[=condition-value]|jsonpath='{JSONPath expression}'=[JSONPath value]]. The default condition-value is true. Condition values are compared after Unicode simple case folding, which is a more general form of case-insensitivity.

                    -h, --help

                    help for wait

                    --local

If true, the command will NOT contact the api-server but run locally.

                    -o, --output string

                    Output format. One of: (json, yaml, name, go-template, go-template-file, template, templatefile, jsonpath, jsonpath-as-json, jsonpath-file).

                    -R, --recursive     Default: true

                    Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory.

                    -l, --selector string

Selector (label query) to filter on, supports '=', '==', and '!=' (e.g. -l key1=value1,key2=value2)

                    --show-managed-fields

                    If true, keep the managedFields when printing objects in JSON or YAML format.

                    --template string

                    Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].

                    --timeout duration     Default: 30s

                    The length of time to wait before giving up. Zero means check once and don't wait, negative means wait for a week.

                    Parent Options Inherited

                    --as string

                    Username to impersonate for the operation. User could be a regular user or a service account in a namespace.

                    --as-group strings

                    Group to impersonate for the operation, this flag can be repeated to specify multiple groups.

                    --as-uid string

                    UID to impersonate for the operation.

                    --cache-dir string     Default: "$HOME/.kube/cache"

                    Default cache directory

                    --certificate-authority string

                    Path to a cert file for the certificate authority

                    --client-certificate string

                    Path to a client certificate file for TLS

                    --client-key string

                    Path to a client key file for TLS

                    --cluster string

                    The name of the kubeconfig cluster to use

                    --context string

                    The name of the kubeconfig context to use

                    --disable-compression

                    If true, opt-out of response compression for all requests to the server

                    --insecure-skip-tls-verify

                    If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure

                    --kubeconfig string

                    Path to the kubeconfig file to use for CLI requests.

                    --kuberc string

                    Path to the kuberc file to use for preferences. This can be disabled by exporting KUBECTL_KUBERC=false feature gate or turning off the feature KUBERC=off.

                    --match-server-version

                    Require server version to match client version

                    -n, --namespace string

                    If present, the namespace scope for this CLI request

                    --password string

                    Password for basic authentication to the API server

                    --profile string     Default: "none"

                    Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex)

                    --profile-output string     Default: "profile.pprof"

                    Name of the file to write the profile to

                    --request-timeout string     Default: "0"

                    The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests.

                    -s, --server string

                    The address and port of the Kubernetes API server

                    --storage-driver-buffer-duration duration     Default: 1m0s

                    Writes in the storage driver will be buffered for this duration, and committed to the non memory backends as a single transaction

                    --storage-driver-db string     Default: "cadvisor"

                    database name

                    --storage-driver-host string     Default: "localhost:8086"

                    database host:port

                    --storage-driver-password string     Default: "root"

                    database password

                    --storage-driver-secure

                    use secure connection with database

                    --storage-driver-table string     Default: "stats"

                    table name

                    --storage-driver-user string     Default: "root"

                    database username

                    --tls-server-name string

                    Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used

                    --token string

                    Bearer token for authentication to the API server

                    --user string

                    The name of the kubeconfig user to use

                    --username string

                    Username for basic authentication to the API server

                    --version version[=true]

                    --version, --version=raw prints version information and quits; --version=vX.Y.Z... sets the reported version

                    --warnings-as-errors

                    Treat warnings received from the server as errors and exit with a non-zero exit code

                    See Also

                    • kubectl - kubectl controls the Kubernetes cluster manager


                    Common Parameters

                    allowWatchBookmarks

                    allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored.


                    continue

The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a previous query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key".

                    This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications.


                    dryRun

                    When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed


                    fieldManager

fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.


                    fieldSelector

                    A selector to restrict the list of returned objects by their fields. Defaults to everything.


                    fieldValidation

fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are:

• Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
• Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+.
• Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered.


                    force

                    Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests.


                    gracePeriodSeconds

The duration in seconds before the object should be deleted. The value must be a non-negative integer; zero means delete immediately. If this value is nil, the default grace period for the specified type will be used.


                    ignoreStoreReadErrorWithClusterBreakingPotential

                    if set to true, it will trigger an unsafe deletion of the resource in case the normal deletion flow fails with a corrupt object error. A resource is considered corrupt if it can not be retrieved from the underlying storage successfully because of a) its data can not be transformed e.g. decryption failure, or b) it fails to decode into an object. NOTE: unsafe deletion ignores finalizer constraints, skips precondition checks, and removes the object from the storage. WARNING: This may potentially break the cluster if the workload associated with the resource being unsafe-deleted relies on normal deletion flow. Use only if you REALLY know what you are doing. The default value is false, and the user must opt in to enable it


                    labelSelector

                    A selector to restrict the list of returned objects by their labels. Defaults to everything.


                    limit

                    limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.

                    The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.
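The limit/continue interaction described above amounts to a client-side pagination loop. Here is a minimal sketch of that loop; fetch_page is a local stand-in for a real paginated list call (the pod names and token value are hypothetical), not an actual API request:

```shell
# fetch_page stands in for a paginated list call such as
#   GET /api/v1/pods?limit=2&continue=<token>
# It prints the page's items followed by a CONTINUE: line carrying the
# next continue token; an empty token means no more pages.
fetch_page() {
  case "$1" in
    "")     printf 'pod-a\npod-b\nCONTINUE:token1\n' ;;
    token1) printf 'pod-c\nCONTINUE:\n' ;;
  esac
}

all=$(
  token=""
  while :; do
    page=$(fetch_page "$token")
    echo "$page" | grep -v '^CONTINUE:'              # emit this page's items
    token=$(echo "$page" | sed -n 's/^CONTINUE://p') # extract next token
    [ -z "$token" ] && break                         # empty token: done
  done
)
echo "$all"
```

A real client would pass the token back verbatim in the next request's continue parameter and be prepared for a 410 ResourceExpired response, as described above.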


                    namespace

                    object name and auth scope, such as for teams and projects


                    pretty

                    If 'true', then the output is pretty printed. Defaults to 'false' unless the user-agent indicates a browser or command-line HTTP tool (curl and wget).


                    propagationPolicy

                    Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground.


                    resourceVersion

                    resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.

                    Defaults to unset


                    resourceVersionMatch

resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details.

                    Defaults to unset


                    sendInitialEvents

                    sendInitialEvents=true may be set together with watch=true. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched.

When the sendInitialEvents option is set, we require the resourceVersionMatch option to also be set. The semantics of the watch request are as follows:

• resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed.
• resourceVersionMatch set to any other value or unset: an Invalid error is returned.

                    Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise.


                    timeoutSeconds

                    Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.


                    watch

                    Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.



                    Last modified September 04, 2025 at 3:37 PM PST: Update API resource reference for v1.34 (3e10e8c195)

                    Extend Resources


                    CustomResourceDefinition

                    CustomResourceDefinition represents a resource that should be exposed on the API server.

                    DeviceClass

                    DeviceClass is a vendor- or admin-provided resource that contains device configuration and selectors.

                    MutatingWebhookConfiguration

MutatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects, and may change, the object.

                    ValidatingWebhookConfiguration

ValidatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects an object without changing it.


                    Last modified April 18, 2021 at 5:30 PM PST: Add auto_gerated metadata (21a2715031)

                    Verify Signed Kubernetes Artifacts

                    FEATURE STATE: Kubernetes v1.26 [beta]

                    Before you begin

You will need to have the following tools installed: cosign, curl, and jq.

                    Verifying binary signatures

                    The Kubernetes release process signs all binary artifacts (tarballs, SPDX files, standalone binaries) by using cosign's keyless signing. To verify a particular binary, retrieve it together with its signature and certificate:

                    URL=https://dl.k8s.io/release/v1.34.0/bin/linux/amd64
                    BINARY=kubectl
                    
                    FILES=(
                        "$BINARY"
                        "$BINARY.sig"
                        "$BINARY.cert"
                    )
                    
                    for FILE in "${FILES[@]}"; do
                        curl -sSfL --retry 3 --retry-delay 3 "$URL/$FILE" -o "$FILE"
                    done
                    

                    Then verify the blob by using cosign verify-blob:

                    cosign verify-blob "$BINARY" \
                      --signature "$BINARY".sig \
                      --certificate "$BINARY".cert \
                      --certificate-identity krel-staging@k8s-releng-prod.iam.gserviceaccount.com \
                      --certificate-oidc-issuer https://accounts.google.com
                    

                    Verifying image signatures

                    For a complete list of images that are signed please refer to Releases.

                    Pick one image from this list and verify its signature using the cosign verify command:

                    cosign verify registry.k8s.io/kube-apiserver-amd64:v1.34.0 \
                      --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \
                      --certificate-oidc-issuer https://accounts.google.com \
                      | jq .
                    

                    Verifying images for all control plane components

                    To verify all signed control plane images for the latest stable version (v1.34.0), please run the following commands:

                    curl -Ls "https://sbom.k8s.io/$(curl -Ls https://dl.k8s.io/release/stable.txt)/release" \
                      | grep "SPDXID: SPDXRef-Package-registry.k8s.io" \
                      | grep -v sha256 | cut -d- -f3- | sed 's/-/\//' | sed 's/-v1/:v1/' \
                      | sort > images.txt
                    input=images.txt
                    while IFS= read -r image
                    do
                      cosign verify "$image" \
                        --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \
                        --certificate-oidc-issuer https://accounts.google.com \
                        | jq .
                    done < "$input"
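To see what the grep/cut/sed pipeline above does, here it is applied to a single sample SBOM entry (assumption: the SPDXID layout shown matches the current release SBOM; it can change between releases):

```shell
# One sample package line as it appears in a Kubernetes release SBOM.
line="SPDXID: SPDXRef-Package-registry.k8s.io-kube-apiserver-amd64-v1.34.0"

# Same transformation as the pipeline above: drop the
# "SPDXID: SPDXRef-Package-" prefix, turn the first remaining dash into a
# path separator, and the dash before the version into a tag separator.
image=$(echo "$line" | cut -d- -f3- | sed 's/-/\//' | sed 's/-v1/:v1/')
echo "$image"   # registry.k8s.io/kube-apiserver-amd64:v1.34.0
```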
                    

                    Once you have verified an image, you can specify the image by its digest in your Pod manifests as per this example:

                    registry-url/image-name@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
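As a fuller sketch, the same digest pinning in a Pod manifest (the pod and container names are hypothetical, and the digest is the placeholder value from the snippet above; substitute the digest reported for the image you verified):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver-pinned
spec:
  containers:
  - name: kube-apiserver
    # Pinning by digest ensures the kubelet pulls exactly the verified bytes,
    # regardless of what the tag later points to.
    image: registry.k8s.io/kube-apiserver-amd64@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
```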
                    

                    For more information, please refer to the Image Pull Policy section.

                    Verifying Image Signatures with Admission Controller

                    For non-control plane images (for example conformance image), signatures can also be verified at deploy time using sigstore policy-controller admission controller.

                    Here are some helpful resources to get started with policy-controller:

                    Verify the Software Bill Of Materials

                    You can verify the Kubernetes Software Bill of Materials (SBOM) by using the sigstore certificate and signature, or the corresponding SHA files:

                    # Retrieve the latest available Kubernetes release version
                    VERSION=$(curl -Ls https://dl.k8s.io/release/stable.txt)
                    
                    # Verify the SHA512 sum
                    curl -Ls "https://sbom.k8s.io/$VERSION/release" -o "$VERSION.spdx"
                    echo "$(curl -Ls "https://sbom.k8s.io/$VERSION/release.sha512") $VERSION.spdx" | sha512sum --check
                    
                    # Verify the SHA256 sum
                    echo "$(curl -Ls "https://sbom.k8s.io/$VERSION/release.sha256") $VERSION.spdx" | sha256sum --check
                    
                    # Retrieve sigstore signature and certificate
                    curl -Ls "https://sbom.k8s.io/$VERSION/release.sig" -o "$VERSION.spdx.sig"
                    curl -Ls "https://sbom.k8s.io/$VERSION/release.cert" -o "$VERSION.spdx.cert"
                    
                    # Verify the sigstore signature
                    cosign verify-blob \
                        --certificate "$VERSION.spdx.cert" \
                        --signature "$VERSION.spdx.sig" \
                        --certificate-identity krel-staging@k8s-releng-prod.iam.gserviceaccount.com \
                        --certificate-oidc-issuer https://accounts.google.com \
                        "$VERSION.spdx"
                    
                    Last modified September 17, 2024 at 1:06 PM PST: Update verify-signed-artifacts.md (db70855a55)

                    Cluster Architecture

                    The architectural concepts behind Kubernetes.

                    A Kubernetes cluster consists of a control plane plus a set of worker machines, called nodes, that run containerized applications. Every cluster needs at least one worker node in order to run Pods.

                    The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster. In production environments, the control plane usually runs across multiple computers and a cluster usually runs multiple nodes, providing fault-tolerance and high availability.

                    This document outlines the various components you need to have for a complete and working Kubernetes cluster.

                    The control plane (kube-apiserver, etcd, kube-controller-manager, kube-scheduler) and several nodes. Each node is running a kubelet and kube-proxy.

                    Figure 1. Kubernetes cluster components.

                    About this architecture

                    The diagram in Figure 1 presents an example reference architecture for a Kubernetes cluster. The actual distribution of components can vary based on specific cluster setups and requirements.

In the diagram, each node runs the kube-proxy component. You need a network proxy component on each node to ensure that the Service API and associated behaviors are available on your cluster network. However, some network plugins provide their own third-party implementation of proxying. When you use that kind of network plugin, the node does not need to run kube-proxy.

                    Control plane components

                    The control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new pod when a Deployment's replicas field is unsatisfied).

                    Control plane components can be run on any machine in the cluster. However, for simplicity, setup scripts typically start all control plane components on the same machine, and do not run user containers on this machine. See Creating Highly Available clusters with kubeadm for an example control plane setup that runs across multiple machines.

                    kube-apiserver

                    The API server is a component of the Kubernetes control plane that exposes the Kubernetes API. The API server is the front end for the Kubernetes control plane.

                    The main implementation of a Kubernetes API server is kube-apiserver. kube-apiserver is designed to scale horizontally—that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.

                    etcd

                    Consistent and highly-available key value store used as Kubernetes' backing store for all cluster data.

                    If your Kubernetes cluster uses etcd as its backing store, make sure you have a back up plan for the data.

                    You can find in-depth information about etcd in the official documentation.

                    kube-scheduler

                    Control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on.

                    Factors taken into account for scheduling decisions include: individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.

                    kube-controller-manager

                    Control plane component that runs controller processes.

                    Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.

                    There are many different types of controllers. Some examples of them are:

                    • Node controller: Responsible for noticing and responding when nodes go down.
                    • Job controller: Watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.
                    • EndpointSlice controller: Populates EndpointSlice objects (to provide a link between Services and Pods).
• ServiceAccount controller: Creates default ServiceAccounts for new namespaces.

                    The above is not an exhaustive list.

                    cloud-controller-manager

                    A Kubernetes control plane component that embeds cloud-specific control logic. The cloud controller manager lets you link your cluster into your cloud provider's API, and separates out the components that interact with that cloud platform from components that only interact with your cluster.

                    The cloud-controller-manager only runs controllers that are specific to your cloud provider. If you are running Kubernetes on your own premises, or in a learning environment inside your own PC, the cluster does not have a cloud controller manager.

                    As with the kube-controller-manager, the cloud-controller-manager combines several logically independent control loops into a single binary that you run as a single process. You can scale horizontally (run more than one copy) to improve performance or to help tolerate failures.

                    The following controllers can have cloud provider dependencies:

                    • Node controller: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding
                    • Route controller: For setting up routes in the underlying cloud infrastructure
                    • Service controller: For creating, updating and deleting cloud provider load balancers

                    Node components

                    Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.

                    kubelet

                    An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.

                    The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn't manage containers which were not created by Kubernetes.

                    kube-proxy (optional)

                    kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.

                    kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.

                    kube-proxy uses the operating system packet filtering layer if there is one and it's available. Otherwise, kube-proxy forwards the traffic itself.
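As an illustration, on a Linux node where kube-proxy runs in iptables mode, you can inspect the rules it maintains (this sketch assumes shell access and root on a cluster node; chain names differ in other proxy modes):

```shell
# List the NAT chain kube-proxy programs for Services in iptables mode
sudo iptables -t nat -L KUBE-SERVICES -n
```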

If you use a network plugin that implements packet forwarding for Services by itself, providing equivalent behavior to kube-proxy, then you do not need to run kube-proxy on the nodes in your cluster.

                    Container runtime

                    A fundamental component that empowers Kubernetes to run containers effectively. It is responsible for managing the execution and lifecycle of containers within the Kubernetes environment.

                    Kubernetes supports container runtimes such as containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container Runtime Interface).

                    Addons

Addons use Kubernetes resources (DaemonSet, Deployment, etc.) to implement cluster features. Because they provide cluster-level features, namespaced resources for addons belong within the kube-system namespace.

                    Selected addons are described below; for an extended list of available addons, please see Addons.

                    DNS

                    While the other addons are not strictly required, all Kubernetes clusters should have cluster DNS, as many examples rely on it.

                    Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.

                    Containers started by Kubernetes automatically include this DNS server in their DNS searches.
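You can observe this from inside any running container; for example (the Pod name is a placeholder for one of your own Pods):

```shell
# Show the nameserver and search domains Kubernetes injected into the container
kubectl exec <pod-name> -- cat /etc/resolv.conf
```

The nameserver line normally points at the cluster DNS Service's IP address.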

                    Web UI (Dashboard)

                    Dashboard is a general purpose, web-based UI for Kubernetes clusters. It allows users to manage and troubleshoot applications running in the cluster, as well as the cluster itself.

                    Container resource monitoring

                    Container Resource Monitoring records generic time-series metrics about containers in a central database, and provides a UI for browsing that data.

                    Cluster-level Logging

                    A cluster-level logging mechanism is responsible for saving container logs to a central log store with a search/browsing interface.

                    Network plugins

                    Network plugins are software components that implement the container network interface (CNI) specification. They are responsible for allocating IP addresses to pods and enabling them to communicate with each other within the cluster.

                    Architecture variations

                    While the core components of Kubernetes remain consistent, the way they are deployed and managed can vary. Understanding these variations is crucial for designing and maintaining Kubernetes clusters that meet specific operational needs.

                    Control plane deployment options

                    The control plane components can be deployed in several ways:

                    Traditional deployment
                    Control plane components run directly on dedicated machines or VMs, often managed as systemd services.
                    Static Pods
                    Control plane components are deployed as static Pods, managed by the kubelet on specific nodes. This is a common approach used by tools like kubeadm.
                    Self-hosted
                    The control plane runs as Pods within the Kubernetes cluster itself, managed by Deployments and StatefulSets or other Kubernetes primitives.
                    Managed Kubernetes services
                    Cloud providers often abstract away the control plane, managing its components as part of their service offering.
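As an illustration of the static Pod approach, on a control plane node created with kubeadm the kubelet watches a manifest directory; the path below is kubeadm's default and may differ in your environment:

```shell
# List the static Pod manifests the kubelet watches on a kubeadm control plane node
ls /etc/kubernetes/manifests
```

You would typically see etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, and kube-scheduler.yaml.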

                    Workload placement considerations

                    The placement of workloads, including the control plane components, can vary based on cluster size, performance requirements, and operational policies:

                    • In smaller or development clusters, control plane components and user workloads might run on the same nodes.
                    • Larger production clusters often dedicate specific nodes to control plane components, separating them from user workloads.
                    • Some organizations run critical add-ons or monitoring tools on control plane nodes.

                    Cluster management tools

                    Tools like kubeadm, kops, and Kubespray offer different approaches to deploying and managing clusters, each with its own method of component layout and management.

                    Customization and extensibility

                    Kubernetes architecture allows for significant customization:

                    • Custom schedulers can be deployed to work alongside the default Kubernetes scheduler or to replace it entirely.
                    • API servers can be extended with CustomResourceDefinitions and API Aggregation.
                    • Cloud providers can integrate deeply with Kubernetes using the cloud-controller-manager.

                    The flexibility of Kubernetes architecture allows organizations to tailor their clusters to specific needs, balancing factors such as operational complexity, performance, and management overhead.

                    What's next

                    Learn more about the following:


                    Verify Signed Kubernetes Artifacts

                    FEATURE STATE: Kubernetes v1.26 [beta]

                    Before you begin

You will need to have the following tools installed: cosign, curl, and jq.

                    Verifying binary signatures

                    The Kubernetes release process signs all binary artifacts (tarballs, SPDX files, standalone binaries) by using cosign's keyless signing. To verify a particular binary, retrieve it together with its signature and certificate:

                    URL=https://dl.k8s.io/release/v1.30.0/bin/linux/amd64
                    BINARY=kubectl
                    
                    FILES=(
                        "$BINARY"
                        "$BINARY.sig"
                        "$BINARY.cert"
                    )
                    
                    for FILE in "${FILES[@]}"; do
                        curl -sSfL --retry 3 --retry-delay 3 "$URL/$FILE" -o "$FILE"
                    done
                    

                    Then verify the blob by using cosign verify-blob:

                    cosign verify-blob "$BINARY" \
                      --signature "$BINARY".sig \
                      --certificate "$BINARY".cert \
                      --certificate-identity krel-staging@k8s-releng-prod.iam.gserviceaccount.com \
                      --certificate-oidc-issuer https://accounts.google.com
                    

                    Verifying image signatures

                    For a complete list of images that are signed please refer to Releases.

                    Pick one image from this list and verify its signature using the cosign verify command:

                    cosign verify registry.k8s.io/kube-apiserver-amd64:v1.30.0 \
                      --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \
                      --certificate-oidc-issuer https://accounts.google.com \
                      | jq .
                    

                    Verifying images for all control plane components

                    To verify all signed control plane images for the latest stable version (v1.30.0), please run the following commands:

                    curl -Ls "https://sbom.k8s.io/$(curl -Ls https://dl.k8s.io/release/stable.txt)/release" \
                      | grep "SPDXID: SPDXRef-Package-registry.k8s.io" \
                      | grep -v sha256 | cut -d- -f3- | sed 's/-/\//' | sed 's/-v1/:v1/' \
                      | sort > images.txt
                    input=images.txt
                    while IFS= read -r image
                    do
                      cosign verify "$image" \
                        --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \
                        --certificate-oidc-issuer https://accounts.google.com \
                        | jq .
                    done < "$input"
                    

                    Once you have verified an image, you can specify the image by its digest in your Pod manifests as per this example:

                    registry-url/image-name@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
                    

                    For more information, please refer to the Image Pull Policy section.
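For example, a minimal Pod manifest pinning a container image by that digest might look like the following (the registry-url/image-name placeholder and the Pod name verified-app are illustrative, not real values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: verified-app   # hypothetical name
spec:
  containers:
  - name: app
    # Pinning by digest ensures the exact verified image is pulled
    image: registry-url/image-name@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2
```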

                    Verifying Image Signatures with Admission Controller

For non-control plane images (for example, the conformance image), signatures can also be verified at deploy time using the sigstore policy-controller admission controller.

                    Here are some helpful resources to get started with policy-controller:

                    Verify the Software Bill Of Materials

                    You can verify the Kubernetes Software Bill of Materials (SBOM) by using the sigstore certificate and signature, or the corresponding SHA files:

                    # Retrieve the latest available Kubernetes release version
                    VERSION=$(curl -Ls https://dl.k8s.io/release/stable.txt)
                    
                    # Verify the SHA512 sum
                    curl -Ls "https://sbom.k8s.io/$VERSION/release" -o "$VERSION.spdx"
                    echo "$(curl -Ls "https://sbom.k8s.io/$VERSION/release.sha512") $VERSION.spdx" | sha512sum --check
                    
                    # Verify the SHA256 sum
                    echo "$(curl -Ls "https://sbom.k8s.io/$VERSION/release.sha256") $VERSION.spdx" | sha256sum --check
                    
                    # Retrieve sigstore signature and certificate
                    curl -Ls "https://sbom.k8s.io/$VERSION/release.sig" -o "$VERSION.spdx.sig"
                    curl -Ls "https://sbom.k8s.io/$VERSION/release.cert" -o "$VERSION.spdx.cert"
                    
                    # Verify the sigstore signature
                    cosign verify-blob \
                        --certificate "$VERSION.spdx.cert" \
                        --signature "$VERSION.spdx.sig" \
                        --certificate-identity krel-staging@k8s-releng-prod.iam.gserviceaccount.com \
                        --certificate-oidc-issuer https://accounts.google.com \
                        "$VERSION.spdx"
                    

                    Use a SOCKS5 Proxy to Access the Kubernetes API

                    FEATURE STATE: Kubernetes v1.24 [stable]

                    This page shows how to use a SOCKS5 proxy to access the API of a remote Kubernetes cluster. This is useful when the cluster you want to access does not expose its API directly on the public internet.

                    Before you begin

                    You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:

                    Your Kubernetes server must be at or later than version v1.24.

                    To check the version, enter kubectl version.

                    You need SSH client software (the ssh tool), and an SSH service running on the remote server. You must be able to log in to the SSH service on the remote server.

                    Task context

                    Figure 1 represents what you're going to achieve in this task.

                    • You have a client computer, referred to as local in the steps ahead, from where you're going to create requests to talk to the Kubernetes API.
                    • The Kubernetes server/API is hosted on a remote server.
                    • You will use SSH client and server software to create a secure SOCKS5 tunnel between the local and the remote server. The HTTPS traffic between the client and the Kubernetes API will flow over the SOCKS5 tunnel, which is itself tunnelled over SSH.

[Diagram: the client on the local machine sends its traffic to a local SSH SOCKS5 proxy; an SSH tunnel carries that traffic to the SSH server on the remote machine, which forwards the proxied HTTPS traffic to the Kubernetes API.]
                    Figure 1. SOCKS5 tutorial components

                    Using ssh to create a SOCKS5 proxy

                    The following command starts a SOCKS5 proxy between your client machine and the remote SOCKS server:

                    # The SSH tunnel continues running in the foreground after you run this
                    ssh -D 1080 -q -N username@kubernetes-remote-server.example
                    

                    The SOCKS5 proxy lets you connect to your cluster's API server based on the following configuration:

                    • -D 1080: opens a SOCKS proxy on local port :1080.
                    • -q: quiet mode. Causes most warning and diagnostic messages to be suppressed.
                    • -N: Do not execute a remote command. Useful for just forwarding ports.
• username@kubernetes-remote-server.example: the remote SSH server behind which the Kubernetes cluster is running (e.g. a bastion host).

                    Client configuration

                    To access the Kubernetes API server through the proxy you must instruct kubectl to send queries through the SOCKS proxy we created earlier. Do this by either setting the appropriate environment variable, or via the proxy-url attribute in the kubeconfig file. Using an environment variable:

                    export HTTPS_PROXY=socks5://localhost:1080
                    

                    To always use this setting on a specific kubectl context, specify the proxy-url attribute in the relevant cluster entry within the ~/.kube/config file. For example:

                    apiVersion: v1
                    clusters:
                    - cluster:
                        certificate-authority-data: LRMEMMW2 # shortened for readability 
                        server: https://<API_SERVER_IP_ADDRESS>:6443  # the "Kubernetes API" server, in other words the IP address of kubernetes-remote-server.example
                        proxy-url: socks5://localhost:1080   # the "SSH SOCKS5 proxy" in the diagram above
                      name: default
                    contexts:
                    - context:
                        cluster: default
                        user: default
                      name: default
                    current-context: default
                    kind: Config
                    preferences: {}
                    users:
                    - name: default
                      user:
                        client-certificate-data: LS0tLS1CR== # shortened for readability
                        client-key-data: LS0tLS1CRUdJT=      # shortened for readability
                    

                    Once you have created the tunnel via the ssh command mentioned earlier, and defined either the environment variable or the proxy-url attribute, you can interact with your cluster through that proxy. For example:

                    kubectl get pods
                    
                    NAMESPACE     NAME                                     READY   STATUS      RESTARTS   AGE
                    kube-system   coredns-85cb69466-klwq8                  1/1     Running     0          5m46s
                    

                    Clean up

                    Stop the ssh port-forwarding process by pressing CTRL+C on the terminal where it is running.

Type unset HTTPS_PROXY in a terminal to stop forwarding HTTPS traffic through the proxy.

                    Further reading


                    Taints and Tolerations

                    Node affinity is a property of Pods that attracts them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite -- they allow a node to repel a set of pods.

                    Tolerations are applied to pods. Tolerations allow the scheduler to schedule pods with matching taints. Tolerations allow scheduling but don't guarantee scheduling: the scheduler also evaluates other parameters as part of its function.

                    Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints.

                    Concepts

                    You add a taint to a node using kubectl taint. For example,

                    kubectl taint nodes node1 key1=value1:NoSchedule
                    

                    places a taint on node node1. The taint has key key1, value value1, and taint effect NoSchedule. This means that no pod will be able to schedule onto node1 unless it has a matching toleration.

                    To remove the taint added by the command above, you can run:

                    kubectl taint nodes node1 key1=value1:NoSchedule-
                    

                    You specify a toleration for a pod in the PodSpec. Both of the following tolerations "match" the taint created by the kubectl taint line above, and thus a pod with either toleration would be able to schedule onto node1:

                    tolerations:
                    - key: "key1"
                      operator: "Equal"
                      value: "value1"
                      effect: "NoSchedule"
                    
                    tolerations:
                    - key: "key1"
                      operator: "Exists"
                      effect: "NoSchedule"
                    

The default Kubernetes scheduler takes taints and tolerations into account when selecting a node to run a particular Pod. However, if you manually specify the .spec.nodeName for a Pod, that action bypasses the scheduler; the Pod is then bound onto the node where you assigned it, even if there are NoSchedule taints on that node that you selected. If this happens and the node also has a NoExecute taint set, the kubelet will evict the Pod unless there is an appropriate toleration set.

                    Here's an example of a pod that has some tolerations defined:

                    apiVersion: v1
                    kind: Pod
                    metadata:
                      name: nginx
                      labels:
                        env: test
                    spec:
                      containers:
                      - name: nginx
                        image: nginx
                        imagePullPolicy: IfNotPresent
                      tolerations:
                      - key: "example-key"
                        operator: "Exists"
                        effect: "NoSchedule"
                    

                    The default value for operator is Equal.

                    A toleration "matches" a taint if the keys are the same and the effects are the same, and:

                    • the operator is Exists (in which case no value should be specified), or
• the operator is Equal and the values are equal.
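The matching rule above can be sketched in plain shell (a simplified sketch only: it ignores the wildcard cases where a toleration has an empty key, which matches every taint, or an empty effect, which matches every effect):

```shell
# Simplified sketch of toleration/taint matching (not Kubernetes code).
# Arguments: taint_key taint_value taint_effect tol_key tol_operator tol_value tol_effect
matches() {
  [ "$1" = "$4" ] || return 1      # keys must be the same
  [ "$3" = "$7" ] || return 1      # effects must be the same
  case "$5" in
    Exists) return 0 ;;            # Exists matches regardless of value
    Equal)  [ "$2" = "$6" ] ;;     # Equal additionally requires equal values
    *)      return 1 ;;
  esac
}

matches key1 value1 NoSchedule key1 Exists "" NoSchedule && echo "toleration matches"
# prints: toleration matches
```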

                    The above example used the effect of NoSchedule. Alternatively, you can use the effect of PreferNoSchedule.

                    The allowed values for the effect field are:

                    NoExecute
                    This affects pods that are already running on the node as follows:
                    • Pods that do not tolerate the taint are evicted immediately
                    • Pods that tolerate the taint without specifying tolerationSeconds in their toleration specification remain bound forever
                    • Pods that tolerate the taint with a specified tolerationSeconds remain bound for the specified amount of time. After that time elapses, the node lifecycle controller evicts the Pods from the node.
                    NoSchedule
                    No new Pods will be scheduled on the tainted node unless they have a matching toleration. Pods currently running on the node are not evicted.
                    PreferNoSchedule
                    PreferNoSchedule is a "preference" or "soft" version of NoSchedule. The control plane will try to avoid placing a Pod that does not tolerate the taint on the node, but it is not guaranteed.

                    You can put multiple taints on the same node and multiple tolerations on the same pod. The way Kubernetes processes multiple taints and tolerations is like a filter: start with all of a node's taints, then ignore the ones for which the pod has a matching toleration; the remaining un-ignored taints have the indicated effects on the pod. In particular,

                    • if there is at least one un-ignored taint with effect NoSchedule then Kubernetes will not schedule the pod onto that node
                    • if there is no un-ignored taint with effect NoSchedule but there is at least one un-ignored taint with effect PreferNoSchedule then Kubernetes will try to not schedule the pod onto the node
                    • if there is at least one un-ignored taint with effect NoExecute then the pod will be evicted from the node (if it is already running on the node), and will not be scheduled onto the node (if it is not yet running on the node).

                    For example, imagine you taint a node like this

                    kubectl taint nodes node1 key1=value1:NoSchedule
                    kubectl taint nodes node1 key1=value1:NoExecute
                    kubectl taint nodes node1 key2=value2:NoSchedule
                    

                    And a pod has two tolerations:

                    tolerations:
                    - key: "key1"
                      operator: "Equal"
                      value: "value1"
                      effect: "NoSchedule"
                    - key: "key1"
                      operator: "Equal"
                      value: "value1"
                      effect: "NoExecute"
                    

                    In this case, the pod will not be able to schedule onto the node, because there is no toleration matching the third taint. But it will be able to continue running if it is already running on the node when the taint is added, because the third taint is the only one of the three that is not tolerated by the pod.
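The filtering described above can be sketched in plain shell (an illustrative sketch only; the key=value:effect strings are the taints and tolerations from the example commands):

```shell
# Sketch of the taint filter: start with all of the node's taints,
# ignore the ones the pod tolerates, and look at what is left.
node_taints="key1=value1:NoSchedule key1=value1:NoExecute key2=value2:NoSchedule"
tolerated="key1=value1:NoSchedule key1=value1:NoExecute"

remaining=""
for taint in $node_taints; do
  case " $tolerated " in
    *" $taint "*) ;;                       # a toleration matches: ignore this taint
    *) remaining="$remaining $taint" ;;    # un-ignored taint: it takes effect
  esac
done
remaining="${remaining# }"
echo "$remaining"   # prints key2=value2:NoSchedule
```

The only un-ignored taint has the NoSchedule effect, so the pod is not scheduled onto the node but, if already running there, keeps running.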

                    Normally, if a taint with effect NoExecute is added to a node, then any pods that do not tolerate the taint will be evicted immediately, and pods that do tolerate the taint will never be evicted. However, a toleration with NoExecute effect can specify an optional tolerationSeconds field that dictates how long the pod will stay bound to the node after the taint is added. For example,

                    tolerations:
                    - key: "key1"
                      operator: "Equal"
                      value: "value1"
                      effect: "NoExecute"
                      tolerationSeconds: 3600
                    

                    means that if this pod is running and a matching taint is added to the node, then the pod will stay bound to the node for 3600 seconds, and then be evicted. If the taint is removed before that time, the pod will not be evicted.

                    Example Use Cases

                    Taints and tolerations are a flexible way to steer pods away from nodes or evict pods that shouldn't be running. A few of the use cases are

                    • Dedicated Nodes: If you want to dedicate a set of nodes for exclusive use by a particular set of users, you can add a taint to those nodes (say, kubectl taint nodes nodename dedicated=groupName:NoSchedule) and then add a corresponding toleration to their pods (this would be done most easily by writing a custom admission controller). The pods with the tolerations will then be allowed to use the tainted (dedicated) nodes as well as any other nodes in the cluster. If you want to dedicate the nodes to them and ensure they only use the dedicated nodes, then you should additionally add a label similar to the taint to the same set of nodes (e.g. dedicated=groupName), and the admission controller should additionally add a node affinity to require that the pods can only schedule onto nodes labeled with dedicated=groupName.

                    • Nodes with Special Hardware: In a cluster where a small subset of nodes have specialized hardware (for example GPUs), it is desirable to keep pods that don't need the specialized hardware off of those nodes, thus leaving room for later-arriving pods that do need the specialized hardware. This can be done by tainting the nodes that have the specialized hardware (e.g. kubectl taint nodes nodename special=true:NoSchedule or kubectl taint nodes nodename special=true:PreferNoSchedule) and adding a corresponding toleration to pods that use the special hardware. As in the dedicated nodes use case, it is probably easiest to apply the tolerations using a custom admission controller. For example, it is recommended to use Extended Resources to represent the special hardware, taint your special hardware nodes with the extended resource name and run the ExtendedResourceToleration admission controller. Now, because the nodes are tainted, no pods without the toleration will schedule on them. But when you submit a pod that requests the extended resource, the ExtendedResourceToleration admission controller will automatically add the correct toleration to the pod and that pod will schedule on the special hardware nodes. This will make sure that these special hardware nodes are dedicated for pods requesting such hardware and you don't have to manually add tolerations to your pods.

                    • Taint based Evictions: A per-pod-configurable eviction behavior when there are node problems, which is described in the next section.

                    Taint based Evictions

                    FEATURE STATE: Kubernetes v1.18 [stable]

                    The node controller automatically taints a Node when certain conditions are true. The following taints are built in:

                    • node.kubernetes.io/not-ready: Node is not ready. This corresponds to the NodeCondition Ready being "False".
                    • node.kubernetes.io/unreachable: Node is unreachable from the node controller. This corresponds to the NodeCondition Ready being "Unknown".
                    • node.kubernetes.io/memory-pressure: Node has memory pressure.
                    • node.kubernetes.io/disk-pressure: Node has disk pressure.
                    • node.kubernetes.io/pid-pressure: Node has PID pressure.
                    • node.kubernetes.io/network-unavailable: Node's network is unavailable.
                    • node.kubernetes.io/unschedulable: Node is unschedulable.
                    • node.cloudprovider.kubernetes.io/uninitialized: When the kubelet is started with an "external" cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint.
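You can check which of these taints are currently present on your nodes; for example:

```shell
# Show each node's taint keys (an empty column means no taints)
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
```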

                    In case a node is to be drained, the node controller or the kubelet adds relevant taints with NoExecute effect. This effect is added by default for the node.kubernetes.io/not-ready and node.kubernetes.io/unreachable taints. If the fault condition returns to normal, the kubelet or node controller can remove the relevant taint(s).

                    In some cases when the node is unreachable, the API server is unable to communicate with the kubelet on the node. The decision to delete the pods cannot be communicated to the kubelet until communication with the API server is re-established. In the meantime, the pods that are scheduled for deletion may continue to run on the partitioned node.

                    You can specify tolerationSeconds for a Pod to define how long that Pod stays bound to a failing or unresponsive Node.

                    For example, you might want to keep an application with a lot of local state bound to node for a long time in the event of network partition, hoping that the partition will recover and thus the pod eviction can be avoided. The toleration you set for that Pod might look like:

                    tolerations:
                    - key: "node.kubernetes.io/unreachable"
                      operator: "Exists"
                      effect: "NoExecute"
                      tolerationSeconds: 6000
                    

                    DaemonSet pods are created with NoExecute tolerations for the following taints with no tolerationSeconds:

                    • node.kubernetes.io/unreachable
                    • node.kubernetes.io/not-ready

                    This ensures that DaemonSet pods are never evicted due to these problems.

                    Taint Nodes by Condition

                    The control plane, using the node controller, automatically creates taints with a NoSchedule effect for node conditions.

                    The scheduler checks taints, not node conditions, when it makes scheduling decisions. This ensures that node conditions don't directly affect scheduling. For example, if the DiskPressure node condition is active, the control plane adds the node.kubernetes.io/disk-pressure taint and does not schedule new pods onto the affected node. If the MemoryPressure node condition is active, the control plane adds the node.kubernetes.io/memory-pressure taint.

                    You can ignore node conditions for newly created pods by adding the corresponding Pod tolerations. The control plane also adds the node.kubernetes.io/memory-pressure toleration on pods that have a QoS class other than BestEffort. This is because Kubernetes treats pods in the Guaranteed or Burstable QoS classes (even pods with no memory request set) as if they are able to cope with memory pressure, while new BestEffort pods are not scheduled onto the affected node.

                    The DaemonSet controller automatically adds the following NoSchedule tolerations to all daemons, to prevent DaemonSets from breaking.

                    • node.kubernetes.io/memory-pressure
                    • node.kubernetes.io/disk-pressure
                    • node.kubernetes.io/pid-pressure (1.14 or later)
                    • node.kubernetes.io/unschedulable (1.10 or later)
                    • node.kubernetes.io/network-unavailable (host network only)

                    Adding these tolerations ensures backward compatibility. You can also add arbitrary tolerations to DaemonSets.

                    Device taints and tolerations

Instead of tainting entire nodes, administrators can also taint individual devices when the cluster uses dynamic resource allocation to manage special hardware. The advantage is that tainting can be targeted at exactly the hardware that is faulty or needs maintenance. Tolerations are also supported and can be specified when requesting devices. Like taints, they apply to all pods that share the same allocated device.

                    What's next